Cloud Software & Data Engineer Job at Schlumberger, Houston, TX

  • Schlumberger
  • Houston, TX

Job Description

A Cloud Software & Data Engineer is responsible for developing data engineering applications using third-party and in-house frameworks, drawing on a broad set of development skills covering data engineering and data accessibility. The Cloud Software & Data Engineer owns the complete software lifecycle - analysis, design, development, testing, implementation and support - as well as troubleshooting issues, deployment and upgrade of services and associated data, performance tuning and other maintenance work. This role focuses additionally on data engineering (large-scale data transformation and manipulation, ETL, etc.) and on infrastructure fine-tuning for optimization. The position reports to the software project manager.

Responsibilities
  • Work with subject matter experts to clarify requirements and use cases.
  • Turn requirements and user stories into functionality: design, build and maintain efficient, reusable, reliable code for high-quality software and services, with documentation and traceability.
  • Develop server-side services to be elastically scalable and secure by design to support high volume & high velocity data processing. Services should be backward and forward compatible to ease deployment.
  • Ensure the solution is deployable, operable, and secure.
  • Write and maintain provisioning, deployment, CI/CD and maintenance scripts for the services they develop.
  • Write Unit Tests, Automation testing, Data Simulations.
  • Support, maintain, troubleshoot and fine-tune working cloud environments and the software running within them.
  • Build prototypes, products and systems that meet project quality standards and requirements.
  • Act as an individual contributor, providing technical leadership and documentation to developers and stakeholders.
  • Provide timely corrective actions on all assigned defects and issues.
  • Contribute to the development plan by providing task estimates.
  • Fulfil organizational responsibilities, such as sharing knowledge and experience with other teams and groups.
  • Conduct technical trainings and sessions, and write whitepapers, case studies, blogs, etc.

Requirements
  • Bachelor's degree or higher in Computer Science or a related field, with a minimum of 5 years of working experience.
  • 5+ years of software development experience in Big Data technologies (Spark, databases & data lakes).
  • Experience with SQL, NoSQL, JSON, CSV and Parquet data formats.
  • Most importantly - hands-on experience building scalable data pipelines using Python & PySpark (a brief sketch follows this list).
  • Advanced knowledge of large-scale parallel computing engines (Spark) - provisioning, deployment, development of computing pipelines, operation and support with performance tuning (3y+).
  • Good experience in building and tuning Spark pipelines in Python.
  • Good programming experience with core Python.
  • Experience designing, building and maintaining data processing pipelines in Apache NiFi and Spark jobs.
  • Extensive knowledge of data structures, patterns and algorithms (5y+).
  • Expertise with several back-end development languages and their associated frameworks like Python (3y+).
  • In-depth knowledge of application, cloud networking and security as well as related development best-practices and patterns (3y+).
  • Advanced knowledge of containerization and virtualization (Kubernetes), as well as scaling clusters & debugging issues on high volume/velocity data jobs and best practices (3y+).
  • Good experience with Spark and Databricks on Kubernetes.
  • Cloud platform knowledge - Azure public cloud expertise (3y+).
  • Advanced knowledge of DevOps, CI/CD and cloud deployment practices (5y+).
  • Advanced skills in setting up and operating databases (relational and non-relational) (3y+).
  • Experienced in application profiling, bottleneck analysis and performance tuning.
  • Effective communication and cross-functional skills.
  • Problem-solving skills; team player; adaptable and a quick worker.
  • Prior experience working on highly Agile projects.
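
For illustration, below is a minimal sketch of the kind of Python/PySpark ETL pipeline described in the requirements above. The app name, paths and column names are hypothetical placeholders, not details taken from the posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical job name; replace with a real application name.
    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Read raw CSV input (placeholder path), clean it, and write partitioned Parquet.
    raw = spark.read.option("header", True).csv("/data/raw/events.csv")

    cleaned = (
        raw.dropna(subset=["event_id"])                      # drop incomplete rows
           .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
    )

    (cleaned.write
            .mode("overwrite")
            .partitionBy("event_date")
            .parquet("/data/curated/events"))                # placeholder output path

    spark.stop()

A production pipeline of this kind would also handle schema validation, incremental loads, monitoring and performance tuning, which is where the Spark operations and tuning experience listed above applies.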

Job Tags

Work experience placement
