Data Engineer II
Responsibilities
- Design and build scalable, secure data pipelines for both batch and real-time processing.
- Support data systems that power machine learning, model inference, and analytics.
- Write clean, production-ready code in Java, Scala, or Python.
- Work with tools like Apache Spark, Kafka, Flink, Airflow, AWS EMR, and other AWS-native services.
- Use graph data modeling and graph databases such as Neo4j or Amazon Neptune.
- Optimize data architecture for performance, cost, and ease of maintenance.
Skills
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related technical field.
- 3-5 years of experience building and supporting complex data systems and applications in cloud environments.
- Strong proficiency in Java, Scala, or Python.
- Deep knowledge of distributed data processing frameworks (e.g., Spark, Kafka, Flink).
- Hands-on experience with cloud services (AWS) and containerized environments (Docker, Kubernetes).
- Understanding of software design patterns, data structures, and DevOps/CI/CD best practices.
- Experience working with Airflow or other data pipeline orchestration services.
- Familiarity with building ML data pipelines (e.g., with Databricks, SageMaker, or similar platforms).
- Experience developing and using scalable, high-performance APIs.
- Bonus points for experience with graph databases and graph algorithms.
Benefits
- 💼 Equal opportunity employer
- 💼 Values diversity
Company
Socure
Location
Worldwide
Published 7 days ago • Expires December 24, 2025 06:01