Houston, TX, USA
Experience using Microsoft Azure technologies such as ADLS, ADF, Azure Databricks, Azure Synapse, and Azure Analysis Services.
Client: American National Insurance Company Duration: Long Term
Location: Houston, TX
Chicago, IL, USA
Location: Chicago, IL Duration: Long Term Client: Hartmarx
Remote (San Diego, CA, USA)
Apr 01, 2021
• Developing and deploying distributed computing applications using Apache Spark
• Experience processing large amounts of data using Java
• Leveraging techniques and practices like Continuous Integration
• Helping drive cross-team design and development via technical leadership and mentoring
• Experience in Java/J2EE
• Strong knowledge of Object-Oriented Analysis and Design, software design patterns, and Java coding principles
• Experience with Core Java/J2EE development
• Worked on a full-lifecycle project implementation of applications with Java and Apache Spark
Location: San Diego, CA Duration: Long Term Client: Sanyo
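The Spark role above centers on distributed data processing. As a purely illustrative single-machine analogue (written in Python for brevity, though the posting names Java; the sample input is invented), Spark's classic flatMap/reduceByKey word-count pattern looks like:

```python
from collections import defaultdict

def word_count(lines):
    """Single-machine analogue of Spark's flatMap -> reduceByKey word count."""
    counts = defaultdict(int)
    for line in lines:            # flatMap: split each line into words
        for word in line.split():
            counts[word] += 1     # reduceByKey: sum counts per word
    return dict(counts)

result = word_count(["spark java spark", "java"])
```

In Spark itself the same logic is partitioned across executors, with the per-key aggregation happening after a shuffle.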
Snowflake Developer with experience in SQL development and data analysis, required to develop a new complex data warehouse.
• In-depth knowledge of Azure Cloud services
• At least 2 full years of recent Snowflake development experience
• Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe
• Able to administer and monitor the Snowflake computing platform
• Hands-on experience with data loads and managing cloud databases
• Experience in creation and modification of user accounts and security groups per request
• Handling large and complex sets of XML, JSON, and CSV data from various sources and databases
• Solid grasp of database engineering and design
• Experience with any scripting language, preferably Python
Technical Skills:
• Snowflake
• Experience with other SQL-based databases, such as Teradata, Oracle, SQL Server, etc.
Location: Plano, TX Duration: Long Term Client: J. C. Penney
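The Snowflake posting above calls out handling JSON and CSV from various sources, and Python scripting. A minimal stdlib sketch of that kind of format conversion (the sample records are invented) might look like:

```python
import csv
import io
import json

# Hypothetical sample: records as they might arrive from an upstream JSON feed.
raw = '[{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]'

records = json.loads(raw)

# Flatten the JSON records into CSV text, e.g. for staging ahead of a bulk load.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(records)
csv_text = buf.getvalue()
```

In practice the resulting file would typically land in a cloud stage and be ingested with Snowflake's bulk-load tooling rather than row by row.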
Dallas, TX, USA
Azure Data Engineer
Job Location: Dallas, TX
Client: HollyFrontier
Role Description: The Azure Data Engineer must have at least 2+ years of experience.
Required Experience and Skills:
· Experience working on Databricks, Azure Data Factory, Azure Functions, Azure Data Explorer, and other Azure data solution ecosystems (Mandatory)
· Experience working on Spark SQL, Hive SQL, U-SQL, and Kusto Query Language (Mandatory)
· Experience working on Spark, Scala, and Python (Mandatory)
· Experience working on ADLS, Cosmos DB, Cassandra DB, MongoDB, Azure Synapse, and Azure SQL Server
· Experience creating frameworks for building data pipelines (Mandatory)
· Experience configuring data streams between Event Hub and Azure Service Bus with other integration systems such as Databricks (Mandatory)
· Experience working with an onshore/offshore model (Mandatory)
· Azure Fundamentals certification (AZ-900) and Azure Data Solution certifications (DP-200 & DP-201) (Preferred)
Mount Laurel, NJ, USA
Duration: 12+ months Location: Mount Laurel, NJ Client: Vanguard
• Minimum of 7+ years overall IT experience, including 5+ years of web service development and integration experience
• Responsible for detailed design, development/unit testing, and integration of applications
• Produce scalable, flexible, high-quality code that satisfies both functional and non-functional requirements
• Develop configurable software services that support application integration with enterprise services
• Experience with Java/J2SE 8, with a deep understanding of the language and core APIs, web services, code profiling, and optimization
• Strong working experience building REST services using the Spring Boot framework
• Knowledge and experience in developing and deploying microservices, and fundamentals of microservice architecture
• Knowledge of and experience in implementing design patterns and creating modular code
• Working experience with Hibernate, HQL, Spring Data, and Spring Security
McLean, Virginia, USA
Client: Freddie Mac
Leads the implementation, automated unit and integration testing, code reviews, debugging, and integration of highly complex code across multiple concurrent projects.
Experience designing and documenting internal and external (commercial) APIs using API documentation frameworks (e.g. Apiary, Swagger)
Strong in programming disciplines such as object-oriented principles, design patterns, data structures, unit testing (TDD using JUnit), and Domain-Driven Design (DDD)
Experience with cloud computing using AWS (e.g., S3, DynamoDB, SNS, SES, EC2) or Azure
Experience with databases (Postgres/MySQL/Oracle/NoSQL), persistence frameworks, and SQL
Experience with GitHub, Docker, Kubernetes, and CI/CD frameworks (Jenkins)
Scrum-based software development methodologies
Experience with defining and implementing non-functional requirements (NFRs: security, performance, cost, etc.)
Skillset: Azure Data Factory, Databricks, Azure services, ETL processes Duration: Long Term Location: Remote Client: Johnson Controls
• Design and implement Microsoft Azure services, including APIs, Event Hub, and Cosmos DB
• Experience with Azure Data Factory (ADF) and Databricks
• Designing, deploying, and maintaining complex ETL processes to perform daily loads to the enterprise data warehouse
• At least 2+ years of developing services in Azure
• Experience building solutions using Azure Event Hub
• Knowledge of Cosmos DB, including querying, design, and configuration
• Hands-on experience in designing and developing high-volume REST APIs
• Azure API Management
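The ADF work this role describes revolves around pipeline definitions, which are authored as JSON. A rough sketch of the shape of a copy-activity pipeline, built here as a Python dict so the structure is explicit (all pipeline, dataset, and source/sink type names are hypothetical placeholders, not taken from the posting):

```python
import json

# Hypothetical ADF-style copy pipeline: one Copy activity moving data
# from a delimited-text source dataset to a SQL sink dataset.
pipeline = {
    "name": "CopyDailyLoadPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyFromBlobToSql",
                "type": "Copy",
                "inputs": [{"referenceName": "SourceBlobDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "SinkSqlDataset", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "sink": {"type": "AzureSqlSink"},
                },
            }
        ]
    },
}

# Serialised form, as it would be submitted to the service or checked into source control.
payload = json.dumps(pipeline, indent=2)
```

The point of the sketch is the nesting: activities live under `properties`, and each activity references named datasets rather than embedding connection details.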
Santa Clara, CA, USA
Jan 25, 2021
Project Manager (Permanent). PM budget is usually 18-24L.
Please find the JD for the Project Manager:
Experience: 10+ years
· Scrum/Agile Master
· Cloud automation as IaC (Infrastructure as Code)
· Knowledge of Power BI/Amazon Athena/Kinesis
· Cost optimization dashboards and analysis
· Understanding of Golang/React/GraphQL
· Power BI/Pivot/Macros
Location: Remote Client: Charter Communications Duration: Long Term
The consultant will be responsible for analysis of the legacy on-premise Hadoop data lake objects and structures; discussion and communication with the client to understand the technical design of data transformation in Talend; development and modification of orchestration using Azure Data Factory and Hive queries; reviewing and optimizing ADF data pipelines with a focus on usability, performance, flexibility, and standardization; and optimizing and validating historical data loads using ADF orchestration and Spark jobs.
Location: Bothell, WA; Plano, TX Client: AT&T
ADLS, ADF, CI/CD, pipelines
• Mandatory 8+ years of high-level and low-level technical design in building data pipelines
• Design expertise in building data pipelines in Azure
• Proficiency in Azure Data Lake Gen2 and the Azure cloud
• ADLS Gen2 development experience
• Hands-on experience with a range of Azure-based big data and analytics platforms such as ADLS, ADF, and Azure Data Warehouse, specifically ingestion to cloud storage/data lake
• Data transformation design for full and incremental loads in the storage accounts to support analytical workloads
• Experience with a version control system (e.g., ClearCase, Git)
• Experience developing in Linux and a virtualized environment
• Experience working in an Agile environment
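The incremental-load design this posting asks for is commonly implemented with a watermark: each run picks up only rows modified since the previous run's high-water mark. A minimal sketch of that filtering logic (the row data and column names are invented for illustration):

```python
from datetime import datetime

# Hypothetical source rows with a modified-timestamp column.
rows = [
    {"id": 1, "modified": datetime(2021, 1, 1)},
    {"id": 2, "modified": datetime(2021, 3, 1)},
]

# High-water mark recorded by the previous pipeline run.
watermark = datetime(2021, 2, 1)

# Incremental load: take only rows changed after the watermark,
# then advance the watermark to the newest row processed.
incremental = [r for r in rows if r["modified"] > watermark]
new_watermark = max(r["modified"] for r in incremental)
```

In an ADF pipeline the same pattern usually appears as a lookup of the stored watermark, a source query parameterized on it, and a final activity writing the new watermark back.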
Louisville, KY, USA
Location: Louisville, KY (Remote for now) Duration: 12+ Months Contract Client: Humana
Essential Skills: Sr. Software Developer
• 10+ years of hands-on development experience writing and debugging code in one of the following server-side programming languages: Python, Java
• Ability to easily switch between projects and programming languages
• 7+ years of experience implementing API service architectures (SOAP, REST) using any of the market-leading API management tools
• Experience writing and debugging SQL queries
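Since this role pairs server-side Python with writing and debugging SQL, a small self-contained sketch using Python's stdlib `sqlite3` shows the parameterized-query style that avoids injection bugs (the table and data are invented for illustration):

```python
import sqlite3

# In-memory database with a toy table standing in for real application data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?)",
    [(1, "open"), (2, "closed"), (3, "open")],
)

# Parameterized query: the driver binds the value, rather than string formatting.
open_count = conn.execute(
    "SELECT COUNT(*) FROM claims WHERE status = ?", ("open",)
).fetchone()[0]
conn.close()
```

The same `?`-placeholder discipline carries over to production drivers for Postgres, Oracle, and the other databases these postings mention (placeholder syntax varies by driver).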
Location: Pleasanton, CA (Remote due to COVID)
Visa: Any visa
Pleasanton, CA, USA
Oct 22, 2020
Duration: Long Term Visa: Any visa Position: Full time Interview process: Phone Client: Autodesk
Experience in AWS and Big Data technologies such as Spark/Spark SQL with Java
Pleasanton, CA, USA
Job Description
· 5+ years of experience working on the Big Data/Hadoop platform
· Strong programming background with Java/Python/Scala
· At least 3+ years of experience working on data integration projects using Hadoop
· Strong development experience with Scala and Spark
· Expertise managing large datasets
· Strong SQL and UNIX scripting skills
· Experience working in an Agile delivery model
· Strong planning and organizational skills
· Good working knowledge of Java
Location: Pleasanton, CA Client: Facebook/Genentech Duration: Long Term