Hi there 👋 Vaishnavi Achanta here :)

  • 🌱 I'm currently part of Amazon!

  • 👯 I'm looking to collaborate on analysis and deep dives into insights

  • 🤔 I'm looking for help with data warehousing

  • 💬 Ask me about SQL, supply chain, data wrangling, data cleaning, Python, PySpark, Databricks, AWS S3

  • 📫 How to reach me: LinkedIn

  • ⚡ Motto of life: experience and experiment with everything


About Me 👾

Data Engineer with hands-on experience building scalable data pipelines and analytics systems, backed by strong domain expertise in supply chain and logistics.

At Amazon, I work with high-volume EU transportation data, designing ETL pipelines, automating workflows, and building reliable data models that drive operational decisions and cost optimization. My work has directly contributed to improving efficiency, reducing manual effort, and enabling real-time visibility into logistics performance.

🔧 What I do:

  • Build and optimize ETL pipelines using Python, SQL, and PySpark
  • Design data models for scalable analytics and reporting
  • Monitor data pipelines, troubleshoot failures, and ensure data quality
  • Automate reporting systems and operational workflows
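The extract-transform-load pattern behind these bullets can be sketched in a few lines. This is a minimal, self-contained illustration using Python's stdlib with made-up shipment data (the `RAW_CSV` contents, lane codes, and cost column are all hypothetical); at Amazon scale the same three steps would run in PySpark against a data lake rather than an in-memory SQLite table:

```python
import csv
import io
import sqlite3

# Hypothetical shipment records standing in for the "extract" source;
# a real pipeline would read from S3 or a warehouse instead.
RAW_CSV = """shipment_id,lane,cost_eur
S1,DE-FR,120.50
S2,DE-PL,95.00
S3,DE-FR,130.25
"""

def extract(text):
    """Extract: parse raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and normalize the lane key."""
    return [(r["shipment_id"], r["lane"].upper(), float(r["cost_eur"]))
            for r in rows]

def load(rows, conn):
    """Load: write the cleaned rows into a relational table."""
    conn.execute("CREATE TABLE IF NOT EXISTS shipments (id TEXT, lane TEXT, cost REAL)")
    conn.executemany("INSERT INTO shipments VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT ROUND(SUM(cost), 2) FROM shipments").fetchone()[0]
print(total)  # 345.75
```

Keeping extract, transform, and load as separate functions makes each step independently testable, which is the same property that matters when the steps become Spark jobs.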

📊 Key Work:

  • Built anomaly detection pipelines identifying 60% of logistics issues, improving detection accuracy by 45%
  • Reduced manual reporting by 20+ hours/month through automated data workflows
  • Developed forecasting pipelines using Prophet & XGBoost for production planning
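As a toy illustration of the anomaly-detection idea (not the production pipeline, whose details are internal), a trailing-window z-score flags points that deviate sharply from recent history. The transit-time numbers below are invented, with one spike injected at index 7:

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(values, window=5, threshold=2.0):
    """Flag indices whose value deviates strongly from the trailing window."""
    anomalies = []
    for i in range(window, len(values)):
        ref = values[i - window:i]          # trailing window, excludes point i
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical daily transit times in hours; 16.5 is the injected outlier.
transit = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 16.5, 10.0, 9.9]
print(rolling_zscore_anomalies(transit))  # [7]
```

Excluding the current point from its own reference window keeps a large spike from inflating the baseline it is judged against.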

💡 Focus Areas: Data Engineering • Distributed Data Systems • Supply Chain Intelligence • Applied AI

📌 Tech Stack: Python • SQL • Apache Spark • Databricks • AWS S3 • ETL

๐Ÿ” What I bring: ๐Ÿ“ฆ Real-world logistics data experience

๐Ÿ“Š KPI dashboards and performance tracking

๐Ÿ”„ Process optimization through data analysis

๐Ÿงฎ Hands-on with Excel, SQL, Power BI & Python (pandas, matplotlib)

🚀 Projects (in progress or completed):

  • 📈 Transportation Cost Optimization Dashboard – built using Power BI
  • 🗃️ Shipment Delay Analysis – Python + Excel project on delay trends
  • 📊 Amazon Empty Asset Movement Case Study – operational insights + mock analysis
  • 🧰 SQL Portfolio – sample queries from mock logistics datasets
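To give a flavor of the kind of query such a portfolio might contain, here is a per-carrier delay-rate aggregation over a mock shipments table (schema, carrier names, and values are all invented for illustration), runnable through Python's built-in sqlite3:

```python
import sqlite3

# Hypothetical mock logistics dataset: one row per shipment.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shipments (id TEXT, carrier TEXT, promised_days INT, actual_days INT);
INSERT INTO shipments VALUES
  ('S1', 'CarrierA', 2, 2),
  ('S2', 'CarrierA', 2, 4),
  ('S3', 'CarrierB', 3, 4),
  ('S4', 'CarrierB', 3, 5);
""")

# Delay rate per carrier: share of shipments arriving after the promised day.
query = """
SELECT carrier,
       ROUND(AVG(CASE WHEN actual_days > promised_days THEN 1.0 ELSE 0.0 END), 2)
         AS delay_rate
FROM shipments
GROUP BY carrier
ORDER BY carrier;
"""
for carrier, rate in conn.execute(query):
    print(carrier, rate)  # CarrierA 0.5, then CarrierB 1.0
```

The `AVG(CASE ...)` idiom turns a boolean condition into a rate without a separate subquery, a pattern that carries over unchanged to Spark SQL.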

📚 Currently learning:

  • Databricks
  • AWS S3, AWS Glue
  • Real-world business case analysis

🎯 Goal: To land a Data Engineer role where I can turn operational data into strategic insights.


Pinned

  • Production_Forecast_Modeling (Public, HTML) – A data-driven approach to predict future oil & gas output using historical production, upstream KPIs, and statistical/ML models for better planning and decision-making.