Databricks Platform Discussions
Dive into comprehensive discussions covering various aspects of the Databricks platform. Join the conversation to deepen your understanding and maximize your usage of the Databricks platform.

Browse the Community

Data Engineering

Join discussions on data engineering best practices, architectures, and optimization strategies with...

12256 Posts

Data Governance

Join discussions on data governance practices, compliance, and security within the Databricks Commun...

530 Posts

Generative AI

Explore discussions on generative artificial intelligence techniques and applications within the Dat...

382 Posts

Machine Learning

Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithm...

1024 Posts

Warehousing & Analytics

Engage in discussions on data warehousing, analytics, and BI solutions within the Databricks Communi...

684 Posts

Activity in Databricks Platform Discussions

lrm_data
by New Contributor
  • 89 Views
  • 2 replies
  • 0 kudos

Lakeflow Connect - SQL Server - Issues restarting after failure

Has anyone else run into a situation where a breaking schema change on a SQL Server source table leaves their Lakeflow Connect pipeline in a state it can't recover from — even after destroying and recreating the pipeline? Here's what happened to us: - ...

Latest Reply
abhi_dabhi
Databricks Partner
  • 0 kudos

Hi @lrm_data yes, this one catches a lot of people. A few things to check on the SQL Server side that commonly block recovery even after destroy + recreate: Stale lakeflow_* capture instance. SQL Server allows only 2 capture instances per table. If bo...

1 More Replies
greengil
by New Contributor III
  • 132 Views
  • 4 replies
  • 0 kudos

Delta Jira data import to Databricks

We need to import a large amount of Jira data into Databricks, and should import only the delta changes. What's the best approach? Using the Fivetran Jira connector, or developing our own Python scripts/pipeline code? Thanks.

Latest Reply
abhi_dabhi
Databricks Partner
  • 0 kudos

Hi @greengil, good question; I went through something similar recently, so I'm sharing what I found. My instinct was also to build it in Python, but once I dug in, the "just write a script" path hides a lot of pain: Deletions are invisible. Jira's RES...
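The deletion caveat in the reply above is the crux of a hand-rolled incremental pull: a JQL-based delta sync captures creates and updates, but deleted issues simply stop appearing in search results. A minimal sketch of the incremental clause, assuming a sync based on Jira's `updated` field (the endpoint and parameter names in the trailing comment are illustrative, not taken from the thread):

```python
from datetime import datetime, timezone

def build_incremental_jql(last_sync: datetime) -> str:
    """Build a JQL clause fetching issues updated since the last sync.

    Note: this only captures creates/updates. Deleted issues never show
    up in search results, so deletions must be reconciled separately,
    e.g. by periodically diffing full issue-key lists.
    """
    # JQL accepts minute-granularity timestamps in "yyyy/MM/dd HH:mm"
    stamp = last_sync.astimezone(timezone.utc).strftime("%Y/%m/%d %H:%M")
    return f'updated >= "{stamp}" ORDER BY updated ASC'

# Hypothetical usage against Jira Cloud's search endpoint:
# requests.get(f"{base_url}/rest/api/3/search",
#              params={"jql": build_incremental_jql(last_sync),
#                      "startAt": 0, "maxResults": 100})
```

The `ORDER BY updated ASC` matters for restartability: you can checkpoint the last seen `updated` timestamp and resume from it after a failure.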

3 More Replies
vg33
by New Contributor
  • 66 Views
  • 1 reply
  • 0 kudos

Network Configuration

I have a Databricks workspace on AWS (serverless compute). I created a network policy with "Allow access to all destinations" enabled and attached it to my workspace. When I run a Python notebook and try to make an HTTP request or curl to any externa...

Latest Reply
Lu_Wang_ENB_DBX
Databricks Employee
  • 0 kudos

Most likely the egress policy change hasn’t actually taken effect on the serverless compute that’s running your notebook. Check these things in order: Verify the network policy itself (Account Console → Security → Networking → Context-based ingress ...

TX-Aggie-00
by Databricks Partner
  • 55 Views
  • 0 replies
  • 0 kudos

Sharepoint Connector Site Limitation

Hey all! We are trying out the beta connector for SharePoint and found that the connector will not work at the root-level site. Is there a reason for this limitation? It is unfortunately a hard blocker for us to use the native connector. MUST_START...

LokeshChikuru
by Databricks Partner
  • 110 Views
  • 3 replies
  • 1 kudos

Databricks integrating with ServiceNow via Lakeflow Connect for data ingestion

We are integrating Databricks with ServiceNow via Lakeflow Connect for data ingestion and are looking for guidance on enforcing integration-user-based data access. Observed behaviour: U2M OAuth authentication succeeds when ServiceNow access is granted to the works...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi, looking through some internal resources, it seems most likely to be down to ServiceNow-side ACLs, High Security Settings, or domain/scope restrictions overriding the admin role on system tables the connector queries. Quick things to check: - Run t...

2 More Replies
gannicus
by New Contributor
  • 87 Views
  • 1 reply
  • 0 kudos

Databricks CLI token creation fails with “cannot configure default credentials”

Hello, I have been generating a Databricks personal access token in my YAML-based CI pipeline using a bash script. The pipeline installs the Databricks CLI and then creates a token using Service Principal (Azure AD application) credentials. Current ...

Latest Reply
emma_s
Databricks Employee
  • 0 kudos

Hi, I'm pretty sure what you're hitting is stricter auth detection in the newer CLI/SDK. Your error shows azure_tenant_id, client_id, and client_secret all populated, so it's seeing more than one credential type and refusing to guess between them. Th...
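One way to act on the advice above, sketched as a bash CI step and assuming Azure service-principal (client-secret) auth — the host value is a placeholder, not from the thread: clear any credential type you are not using and pin the auth type so the CLI stops having to guess.

```shell
# Sketch, assuming Azure service-principal auth in a CI pipeline.
# The newer CLI refuses to choose between multiple configured
# credential types, so unset leftovers and state the type explicitly.
unset DATABRICKS_TOKEN                              # drop any stray PAT
export DATABRICKS_AUTH_TYPE="azure-client-secret"   # stop auth-type guessing
export DATABRICKS_HOST="https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace URL
export ARM_TENANT_ID="$AZURE_TENANT_ID"
export ARM_CLIENT_ID="$AZURE_CLIENT_ID"
export ARM_CLIENT_SECRET="$AZURE_CLIENT_SECRET"
```

With the auth type pinned, subsequent `databricks` CLI calls in the same shell resolve credentials deterministically instead of erroring on the ambiguity.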

SlavaPeshkov
by New Contributor
  • 134 Views
  • 3 replies
  • 0 kudos

Knowledge cutoff for Genie

Hi, I was trying to make a dashboard using AI Genie - it worked well for the basics, but was unable to perform some of the modifications that were obviously (via UI) doable. Per Genie's response below - it only knows Databricks documentation up to Ap...

Latest Reply
emma_s
Databricks Employee
  • 0 kudos

Hi, just wanted to add to what others have said and make sure it's clear: I think you're trying to use a Genie Space to create your dashboard, but Genie code is probably a better tool here. If you have visuals created in a Genie space that you would l...

2 More Replies
mdee
by Databricks Partner
  • 90 Views
  • 2 replies
  • 1 kudos

LDP Materialized View Incremental Refreshes - Changeset Size Thresholds

Is there any documentation available around the changeset size thresholds for materialized view incremental refreshes?  Are these configurable at all?  Are they constant or do the thresholds change depending on the number of rows/size of the material...

Latest Reply
emma_s
Databricks Employee
  • 1 kudos

Hi, on top of Pradeep's reply, which I'd recommend trying, I'd also suggest you raise a support ticket for this. They will potentially be able to tweak the settings in the backend (not guaranteed), but it may help. Thanks, Emma

1 More Replies
DataCurious
by New Contributor III
  • 20220 Views
  • 26 replies
  • 19 kudos

how do you disable serverless interactive compute for all users

I don't want users using serverless interactive compute for their jobs. How do I disable it for everyone, or for specific users?

Latest Reply
timo2024
Databricks Partner
  • 19 kudos

Btw. I just realized that, at least with VNet-injected workspaces, you can probably prevent any sensible serverless usage by not granting permissions and a network route to the needed resources. At least in Azure Databricks, notebooks need access to Databr...

25 More Replies
lrm_data
by New Contributor
  • 70 Views
  • 1 reply
  • 0 kudos

Lakeflow Connect SQL Server — Snapshots Firing Outside Configured Full Refresh Window?

Has anyone else seen full refresh snapshots trigger outside of their configured refresh window in Lakeflow Connect? Here's our situation: - We have a full refresh window configured to restrict snapshot operations to off-hours - On at least one occasion,...

Latest Reply
Sumit_7
Honored Contributor II
  • 0 kudos

@lrm_data It is very unlikely for the refresh to be triggered outside the configured window, though I would still suggest checking the configured window and the Auto Full Refresh policy once to be sure. If it still persists, then you may raise a support...

ShivaPolusani
by New Contributor II
  • 83 Views
  • 2 replies
  • 0 kudos

Claude Desktop connect to Databricks MCP

How do I connect Claude Desktop to the Databricks connector (available in Connectors)? What are the steps involved? Can anyone provide a detailed step-by-step implementation so that I can query the data using Claude Desktop, please?

Latest Reply
Sumit_7
Honored Contributor II
  • 0 kudos

@ShivaPolusani This is achieved via MCP remote, using a few arguments such as the workspace link and a token. Check the documentation here: https://docs.databricks.com/aws/en/generative-ai/mcp/connect-external-services?language=Claude+Desktop#pat-examples Video Li...

1 More Replies
MyProfile
by New Contributor
  • 83 Views
  • 1 reply
  • 0 kudos

Disable Public Network Access on Databricks Managed Storage Account - Deny Assignment

Issue description: I am attempting to disable public network access on the Azure Databricks managed storage account. However, I am encountering the following error: Failed to save resource settings — access is denied due to a deny assignment created by...

Latest Reply
Sumit_7
Honored Contributor II
  • 0 kudos

@MyProfile This would be helpful, check once - https://learn.microsoft.com/en-us/answers/questions/1707749/managed-storage-accounts-compliance

Raghu_Bindingan
by New Contributor III
  • 5533 Views
  • 5 replies
  • 2 kudos

Truncate delta live table and try to repopulate it in the pipeline

Has anyone attempted to truncate a Delta Live gold-level table that gets populated via a pipeline, and then tried to repopulate it by starting the pipeline? I have a situation wherein I need to reprocess all data in my gold table, so I stopped the ...

Latest Reply
sanjivsingh
New Contributor
  • 2 kudos

My blog on this: https://medium.com/@singh.sanjiv/truncate-and-load-streaming-live-table-8f840eb424d1

4 More Replies
leopold_cudzik
by New Contributor II
  • 105 Views
  • 1 reply
  • 0 kudos

Resolved! Lakehouse sync tables over rolling history

Hi, we're exploring replacing one of the use cases we run in our cloud provider with Databricks pipelines. We have explored the possibility of subscribing to an Event Hub using SDP pipelines, feeding our IoT data into a Delta table where...

Latest Reply
Ashwin_DSA
Databricks Employee
  • 0 kudos

Hi @leopold_cudzik, The pattern you are suggesting is feasible, but it’s much easier to manage if you separate history ingestion from the 7-day serving view instead of cleaning the streaming sink table in place. A common architecture on Databricks wo...
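To make the history/serving split above concrete, here is a minimal sketch of the rolling-window predicate, using plain Python over (timestamp, payload) pairs as a stand-in for a view over the Delta history table; the SQL in the docstring is an assumed equivalent, not taken from the thread.

```python
from datetime import datetime, timedelta, timezone

def serving_window(events, now=None, days=7):
    """Filter an already-ingested history down to a rolling serving window.

    'events' is an iterable of (timestamp, payload) pairs. In the actual
    pipeline the same predicate would live in a view over the Delta
    history table, e.g. (hypothetical SQL):
        WHERE event_ts >= current_timestamp() - INTERVAL 7 DAYS
    so the streaming sink table itself is never cleaned in place.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return [(ts, payload) for ts, payload in events if ts >= cutoff]
```

The key design point is that the window is applied at read time: the full history keeps accumulating (and can be pruned independently via retention), while consumers only ever see the last seven days.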

kevinleindecker
by New Contributor II
  • 362 Views
  • 6 replies
  • 1 kudos

SQL Warehouse error: "Cannot read properties of undefined (reading 'data')" when querying system tab

Queries that previously worked started failing in SQL Warehouse (Dashboards) without any changes on our side. The query succeeds, but fails to render results with the error: "Cannot read properties of undefined (reading 'data')" This happens with: - system.b...

Latest Reply
Esgario
New Contributor II
  • 1 kudos

Same problem here. I have previously reported this issue, and it had been resolved at the time. However, the problem has now reoccurred. When ingesting large tables (over 100k rows), the system is unable to properly render the data, preventing the tab...

5 More Replies