The pace of change in data engineering seems to be accelerating. I'm grappling with this on two fronts: as a product marketer trying to understand how the tools Snowflake is building are replacing routine tasks, and through the swift adoption of AI to accelerate coding work. In this blog post, I cover: ✅ Cortex Code: Building production-grade pipelines with simple prompts. ✅ Dynamic Tables: How companies like Travelpass are seeing huge boosts in productivity. ✅ dbt projects on Snowflake: Running dbt natively for a unified dev experience. Check out the post and let me know your thoughts: https://lnkd.in/gnMvEC3M #DataPlatform #ModernDataStack #dbt #DataQuality #SnowflakeDB
Accelerating Data Engineering with Snowflake and AI
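For context on the Dynamic Tables mentioned above, here is a minimal Python sketch that renders Snowflake's CREATE DYNAMIC TABLE DDL as a string. The table, warehouse, and query names are hypothetical, and in practice you would submit the statement through a Snowflake session rather than just building it:

```python
def dynamic_table_ddl(name, warehouse, lag, query):
    """Render Snowflake's CREATE DYNAMIC TABLE statement as a SQL string."""
    return (
        f"CREATE OR REPLACE DYNAMIC TABLE {name}\n"
        f"  TARGET_LAG = '{lag}'\n"
        f"  WAREHOUSE = {warehouse}\n"
        f"AS\n{query}"
    )

# Hypothetical example: keep an enriched orders table at most 5 minutes stale.
ddl = dynamic_table_ddl(
    "orders_enriched",
    "transform_wh",
    "5 minutes",
    "SELECT o.*, c.region FROM orders o JOIN customers c ON o.customer_id = c.id",
)
```

The appeal is that Snowflake manages the incremental refresh itself; you declare the query and the freshness target (TARGET_LAG) instead of writing scheduling and merge logic.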
-
We help organizations design, build and scale their Databricks data platforms. 🤖 𝗔𝗜-𝗱𝗿𝗶𝘃𝗲𝗻 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁 continues to evolve rapidly. With the introduction of 𝗗𝗮𝘁𝗮𝗯𝗿𝗶𝗰𝗸𝘀 𝗚𝗲𝗻𝗶𝗲 𝗖𝗼𝗱𝗲, we’re seeing another major leap forward. Rather than simply assisting, Genie Code actively collaborates across the entire data lifecycle, from data exploration and feature engineering to pipeline development and dashboarding. That’s why we’re excited to share our latest 𝗗𝗮𝘁𝗮 𝗦𝗻𝗮𝗰𝗸 on this topic 🍿. Curious about what this could mean for your organization? Let’s connect!
-
Our "Modern Data Workflows with Databricks" course is now live on Analyst Builder! Databricks is one of the most in-demand platforms in data right now, but getting started can feel overwhelming. In this course, I'm going to walk you through everything you need to know to actually work in Databricks. Things like: - Navigating the Databricks UI and understanding how everything connects - Building dashboards and sharing them with stakeholders - Creating ELT pipelines from raw data to clean, ready-to-use tables - Scheduling and automating your workflows with Jobs - Using AI features like Genie Spaces and AI Agents Whether you're trying to break into data engineering or you're an analyst looking to level up, this course will get you there! Check it out here: https://lnkd.in/e_8sAxkJ
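The "raw data to clean, ready-to-use tables" step from the course outline can be sketched in plain Python; in Databricks you would typically do this with Spark or SQL, and the column names and cleaning rules here are hypothetical:

```python
def clean_orders(raw_rows):
    """Transform raw ingested rows into clean, typed, deduplicated records."""
    seen, clean = set(), []
    for row in raw_rows:
        key = row.get("order_id")
        if key is None or key in seen:
            continue  # drop rows missing a primary key or already seen
        seen.add(key)
        clean.append({
            "order_id": int(key),
            "amount": round(float(row.get("amount", 0) or 0), 2),
            "status": (row.get("status") or "unknown").strip().lower(),
        })
    return clean

raw = [
    {"order_id": "1", "amount": "19.99", "status": " Shipped "},
    {"order_id": "1", "amount": "19.99", "status": "Shipped"},  # duplicate
    {"order_id": None, "amount": "5"},                          # missing key
    {"order_id": "2", "amount": None, "status": None},
]
clean = clean_orders(raw)
```

The same shape (filter, dedupe, cast, standardize) is what an ELT job does at scale, just expressed over DataFrames or tables instead of dicts.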
-
Today I'm launching my latest course "Modern Data Workflows with Databricks" on AnalystBuilder.com! Databricks is one of the most in-demand platforms in data right now, but getting started can feel overwhelming. In this course, I'm going to walk you through everything you need to know to actually work in Databricks. Things like: - Navigating the Databricks UI and understanding how everything connects - Building dashboards and sharing them with stakeholders - Creating ELT pipelines from raw data to clean, ready-to-use tables - Scheduling and automating your workflows with Jobs - Using AI features like Genie Spaces and AI Agents Whether you're trying to break into data engineering or you're an analyst looking to level up, this course will get you there! Check it out here: https://lnkd.in/eDRWy5fF
-
"Modern data stack" is one of the most overused phrases in tech. Most companies that say they have one, don't. They have tools. Bought at different times. By different people. For different reasons. No architecture connecting them. The actual modern data stack has 4 layers, built in a specific order: → Ingestion — raw data lands clean, untouched (Fivetran / Airbyte) → Transformation — dbt tests and documents every model before it's used → Semantic layer — one definition per metric, enforced everywhere (Cube Dev) → Visualisation — dashboards nobody argues about (Sigma / Superset) Skip a layer and everything above it is unstable. We've seen teams add a 5th BI tool hoping it'll fix trust in their numbers. It never does. The problem is always a layer below. How many of these 4 layers does your company have fully built? Day 8 of 30 · #FromBrokenToBuilt · warehows.ai
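The semantic-layer idea in that post, one definition per metric enforced everywhere, can be sketched in a few lines of Python. This illustrates the principle only (it is not how Cube Dev implements it), and the metric name and formula are made up:

```python
# One definition per metric: every consumer (dashboard, notebook, API)
# calls the same registered function instead of re-deriving the formula.
METRICS = {}

def metric(name):
    def register(fn):
        METRICS[name] = fn
        return fn
    return register

@metric("net_revenue")
def net_revenue(rows):
    """The single, governed definition of net revenue."""
    return sum(r["gross"] - r["refunds"] for r in rows)

rows = [{"gross": 100.0, "refunds": 10.0}, {"gross": 50.0, "refunds": 0.0}]
dashboard_value = METRICS["net_revenue"](rows)  # what the BI layer shows
notebook_value = METRICS["net_revenue"](rows)   # what an ad-hoc analysis gets
```

Because both consumers resolve the metric through the registry, they cannot drift apart, which is exactly the trust problem a fifth BI tool never fixes.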
-
AI products don't behave like traditional apps. And that breaks most data setups. Tabnine runs continuously inside developer workflows, generating hundreds of events per user every hour. But their in-house pipelines couldn't keep up. Data was fragmented, identities were split across environments, and teams lacked visibility into real usage. With RudderStack, Tabnine rebuilt their data foundation around a warehouse-first architecture. Events are collected once, transformed for consistency and compliance, and delivered to Snowflake and downstream tools. In the words of Nimrod Astarhan from Tabnine's engineering team: "The best thing about our data infrastructure is that you rarely hear about it. It just works." Now every team works from the same trusted view of developer behavior. Get the full story. Link in comments ⬇️
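The identity-split problem the post describes, one user appearing under different ids in different environments, can be sketched as a toy resolution step. This is an illustration of the idea, not RudderStack's actual logic, and all ids are invented:

```python
def stitch_identities(events, identity_map):
    """Resolve per-environment user ids to one canonical id before loading."""
    for e in events:
        key = (e["env"], e["user_id"])
        e["canonical_id"] = identity_map.get(key, e["user_id"])
    return events

# Hypothetical mapping: the same developer seen from the IDE plugin and the web app.
identity_map = {("ide", "u-42"): "person-7", ("web", "acct-99"): "person-7"}
events = [
    {"env": "ide", "user_id": "u-42", "event": "completion_accepted"},
    {"env": "web", "user_id": "acct-99", "event": "login"},
]
stitched = stitch_identities(events, identity_map)
```

Doing this once, before events land in the warehouse, is what lets every downstream team work from the same view of a user.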
-
Snowflake’s Cortex Code 🤝 Omni’s Agent Omni was built from the start to leverage the advancements in data infrastructure. We’re integrated deeply into your data engineering workflow in Snowflake. Peter Whitehead built new Cortex Code skills that connect Snowflake and Omni end to end. In addition to leveraging all of your logic defined in Snowflake in Omni, data engineers can now convert Omni Topics into Snowflake semantic views. 1:1 pairing between what you're building in Omni and what you're working on in Snowflake. All of your organization can use any Omni interface (AI, SQL, Excel, point and click) to access data and define semantic context. Now you can have a bi-directional integration into Snowflake enabled by AI. Thanks so much Sridhar Ramaswamy, Josh Klahr, Baris Gultekin, Christian Kleinerman and the Snowflake team for your partnership. Demo below 👇
-
Not only can analysts self-serve however they want in Omni - engineers can now convert Omni Topics directly back into Snowflake semantic views. Your business metrics and logic, locked in, bi-directional. The Snowflake + Omni integration just got a whole lot more powerful!
-
I started wondering: how far can AI go in the data world? Could it ever replace data engineers, analysts, or architects? In just one week, I built a very junior data bot. It can run basic validations against documentation and even extract data lineage from Snowflake. The takeaway? AI won’t replace senior-level expertise anytime soon, but it can speed up senior engineers’ work by 5x or more. Curious about what’s already out there, I explored the market and found something impressive: Upriver automates enterprise data engineering with AI-driven pipelines. The future of data is already here.
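Lineage extraction of the kind the bot does can be approximated surprisingly cheaply. A naive sketch, assuming plain SELECT statements and using only a regex (a real parser would handle CTEs, subqueries, and quoting); the table names are hypothetical:

```python
import re

def extract_lineage(sql):
    """Naive lineage: list the tables a statement reads via FROM/JOIN clauses."""
    return sorted(set(
        m.group(1).lower()
        for m in re.finditer(r"\b(?:FROM|JOIN)\s+([\w.]+)", sql, re.I)
    ))

sql = """
CREATE TABLE mart.daily_rev AS
SELECT d.day, SUM(o.amount)
FROM raw.orders o
JOIN raw.dates d ON o.day_id = d.id
GROUP BY d.day
"""
upstream = extract_lineage(sql)  # tables mart.daily_rev depends on
```

Even this toy version shows why the "5x speed-up" framing rings true: the tedious part is scanning hundreds of statements, which machines do well, while judging whether the lineage is correct still takes a senior engineer.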
-
Manual data processes are the enemy of global growth. Navan is a prime example of what happens when you bridge the gap between raw data and confident, real-time insights. By empowering their finance teams with self-service analytics—built on a bedrock of automated governance—they’ve scaled their impact without increasing the manual burden on their data engineers. The secret? Shifting the mindset from "Report Builder" to "Platform Builder." Check out how Bhuvan Bhatia and the Navan team utilize ThoughtSpot, Snowflake, dbt Labs, Atlan, and Monte Carlo to stay compliant and scalable: https://bit.ly/4e2ysGU
-
"Report builders to platform builders." That one line from Bhuvan Bhatia captures what every data team is chasing right now. Navan runs billions in annual spend across 1,000+ customers on Snowflake, dbt, and ThoughtSpot. They had Tableau. They replaced it. Not because it was broken — but because their business users needed to "TALK" to the data themselves, on top of governed definitions, without filing a ticket and waiting days. The semantic layer was the unlock. Define metrics once. Govern always. Let every team explore on their own terms. If your data team is still the bottleneck between the question and the answer, this is worth 5 minutes.