job summary:
We are seeking a highly skilled Databricks Platform Administrator responsible for the hands-on management, optimization, and maintenance of Databricks environments on AWS. The ideal candidate will have extensive experience in data engineering, programming, and cloud-based integration platforms, ensuring seamless data flow and interoperability between systems and applications across our organization.
location: Dallas, Texas
job type: Contract
salary: $75 - $79 per hour
work hours: 8am to 6pm
education: Bachelor's
responsibilities:
Primary Responsibilities:
- Design and build large-scale application development projects and programs with a hands-on approach.
- Ensure the technical validity of solutions and actively drive their implementation.
- Develop and maintain detailed business and technical process documentation and training materials; build and code frameworks.
- Review problem logs, identify recurring issues, and implement and automate long-term solutions.
- Perform hands-on development, administration, design, and performance tuning.
qualifications:
Minimum Qualifications:
- 5+ years of hands-on experience with a BS or MS in Computer Science or equivalent education and experience.
- 3+ years of hands-on experience in framework development and building integration layers to solve complex business use cases, with a strong emphasis on Databricks and AWS.
Technical Skills:
- Strong hands-on coding skills in Python.
- Extensive hands-on experience with Databricks for developing integration layer solutions.
- AWS Certified Data Engineer or AWS Certified Machine Learning certification, or equivalent hands-on experience with AWS Cloud services.
- Proficiency in building data frameworks on AWS, including hands-on experience with tools such as AWS Lambda, AWS Glue, Amazon SageMaker, and Amazon Redshift.
- Hands-on experience with cloud-based data warehousing and transformation tools such as Delta Lake tables, dbt, and Fivetran.
- Familiarity with machine learning and open-source machine learning ecosystems.
- Hands-on experience with integration tools and frameworks such as Apache Camel and MuleSoft.
- Solid understanding of API design principles, RESTful services, and message queuing technologies.
- Familiarity with database systems and SQL.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
- Proficiency in setting up and managing Databricks workspaces, including VPC management, security groups, and VPC peering.
- Hands-on experience with CI/CD pipeline management using tools like AWS CodePipeline, Jenkins, or GitHub Actions.
- Knowledge of monitoring and logging tools such as Amazon CloudWatch, Datadog, or Prometheus.
- Hands-on experience with data ingestion and ETL processes using AWS Glue, Databricks Auto Loader, and Informatica.
skills:
- Python
- Databricks (workspace setup and administration, Auto Loader, Delta Lake)
- AWS (Lambda, Glue, SageMaker, Redshift)
- dbt and Fivetran
- Apache Camel and MuleSoft
- API design, RESTful services, and message queuing
- Database systems and SQL
- Infrastructure as Code (Terraform, AWS CloudFormation)
- CI/CD (AWS CodePipeline, Jenkins, GitHub Actions)
- Monitoring and logging (Amazon CloudWatch, Datadog, Prometheus)
- Data ingestion and ETL (AWS Glue, Databricks Auto Loader, Informatica)
- Machine learning and open-source machine learning ecosystems
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.
At Randstad Digital, we welcome people of all abilities and want to ensure that our hiring and interview process meets the needs of all applicants. If you require a reasonable accommodation to make your application or interview experience a great one, please contact HRsupport@randstadusa.com.
Pay offered to a successful candidate will be based on several factors, including the candidate's education, work experience, work location, specific job duties, certifications, etc. In addition, Randstad Digital offers a comprehensive benefits package, including health insurance, an incentive and recognition program, and a 401(k) contribution (all benefits are based on eligibility).
This posting is open for thirty (30) days.