Advanced • Last updated: 2026-04-09 • 3 sections
Expert interview questions on Snowflake Secure Data Sharing, reader accounts, Marketplace listings, cross-cloud sharing, and data clean rooms.
Q: How does Snowflake's Secure Data Sharing work without copying data? Walk through the architecture.
Snowflake sharing works at the metadata layer. The provider creates a SHARE object, grants access to specific databases/schemas/tables/views, and adds consumer accounts. The consumer creates a database FROM SHARE. Under the hood: consumers read the provider's micro-partitions directly from cloud storage — no data is copied, moved, or duplicated. The cloud services layer manages access control, ensuring consumers can only read granted objects. Storage costs remain with the provider. Compute costs are on the consumer (they use their own warehouses to query shared data). This is possible because of Snowflake's architecture: storage is decoupled from compute, and micro-partitions are immutable and self-describing.
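The flow above can be sketched end to end. This is a minimal example; the share, database, and account names are illustrative:

```sql
-- Provider side: create the share and grant objects to it
CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;

-- Add the consumer's account (hypothetical locator)
ALTER SHARE sales_share ADD ACCOUNTS = xy12345;

-- Consumer side: mount the share as a read-only database (no data copied)
CREATE DATABASE shared_sales FROM SHARE provider_org.sales_share;
GRANT IMPORTED PRIVILEGES ON DATABASE shared_sales TO ROLE analyst_role;
```

Note that the consumer's `CREATE DATABASE ... FROM SHARE` creates only metadata; queries against it read the provider's micro-partitions directly, using the consumer's own warehouse.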
Q: What are the differences between direct sharing, listings, and data exchange?
Direct sharing (CREATE SHARE): point-to-point between specific Snowflake accounts. You manage consumer accounts manually. Free, no marketplace involvement. Listings (Snowflake Marketplace): your data appears in the public marketplace for any Snowflake customer to discover and request. Can be free or paid. Includes description, sample queries, and usage documentation. Data Exchange: a private marketplace for a curated group of accounts (e.g., within your enterprise or with select partners). Combines marketplace discoverability with controlled access. Choose direct sharing for known B2B partners, marketplace for public monetization, exchange for enterprise-internal or consortium data sharing.
Q: A partner needs to share data with you, but they're on a different cloud provider (your org is on AWS, theirs is on Azure). How does cross-cloud sharing work?
Cross-cloud/cross-region sharing uses Snowflake's Global Data Sharing infrastructure (also called Auto-Fulfillment or Listings). For direct shares, both accounts must be in the same region on the same cloud — you cannot directly share AWS us-east-1 → Azure West Europe. For listings (Marketplace/Exchange), Snowflake handles replication automatically: the provider publishes a listing, and Snowflake replicates the data to the consumer's region/cloud. The provider pays replication storage and transfer costs. Alternative: use database replication (CREATE REPLICATION GROUP) to replicate to a secondary account in the consumer's region, then share from there. This gives more control over replication frequency and costs.
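The replication-group alternative can be sketched as follows. Assumed names (organization, accounts, schedule) are hypothetical:

```sql
-- Primary account (provider, AWS us-east-1): define what replicates and to where
CREATE REPLICATION GROUP sales_rg
  OBJECT_TYPES = DATABASES
  ALLOWED_DATABASES = sales_db
  ALLOWED_ACCOUNTS = myorg.azure_we_acct
  REPLICATION_SCHEDULE = '120 MINUTE';

-- Secondary account (Azure West Europe): create and refresh the replica
CREATE REPLICATION GROUP sales_rg
  AS REPLICA OF myorg.primary_acct.sales_rg;
ALTER REPLICATION GROUP sales_rg REFRESH;

-- Then create a share in the secondary account and share locally as usual.
```

The schedule and manual `REFRESH` give you direct control over how often replication (and its transfer cost) occurs, which auto-fulfillment does not.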
Q: How do you implement row-level security on shared data so different consumers see different rows?
Use secure views with CURRENT_ACCOUNT() or a mapping table. Pattern: (1) Create a mapping table: CREATE TABLE share_access_control (account_locator VARCHAR, allowed_region VARCHAR). (2) Create a secure view: CREATE SECURE VIEW shared_sales AS SELECT * FROM sales WHERE region IN (SELECT allowed_region FROM share_access_control WHERE account_locator = CURRENT_ACCOUNT()). (3) Share the secure view (not the base table). Each consumer's queries automatically filter to their allowed rows based on their account locator. Important: the view MUST be SECURE — regular views expose their definition in GET_DDL(), which would reveal the base table and filtering logic to consumers.
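The three-step pattern above, written out as a sketch (table, column, and share names are illustrative):

```sql
-- (1) Mapping table: which consumer account sees which region
CREATE TABLE share_access_control (
  account_locator VARCHAR,
  allowed_region  VARCHAR
);
INSERT INTO share_access_control VALUES
  ('XY12345', 'EMEA'),
  ('AB67890', 'APAC');

-- (2) Secure view that filters rows by the querying account's locator
CREATE SECURE VIEW shared_sales AS
SELECT s.*
FROM sales s
WHERE s.region IN (
  SELECT allowed_region
  FROM share_access_control
  WHERE account_locator = CURRENT_ACCOUNT()
);

-- (3) Share the secure view only -- never the base table or mapping table
GRANT SELECT ON VIEW shared_sales TO SHARE sales_share;
```

Because the view is SECURE, consumers cannot retrieve its definition, so the existence of the mapping table and the filtering logic stay hidden.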
Q: What is a reader account and when would you create one?
A reader account is a Snowflake account created and managed by the provider, intended for consumers who don't have their own Snowflake account. The provider pays for the reader account's compute (warehouses), storage (minimal — only metadata), and credits. Use cases: (1) sharing data with non-Snowflake customers (they get a limited Snowflake UI). (2) Providing read-only data access to external analysts or regulators. (3) Paid data products where you want to control the compute environment. Limitations: reader accounts cannot load their own data, perform DML, or create shares — they're strictly read-only consumers of the provider's shared data. For cost control, set resource monitors on the reader account's warehouses.
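A minimal sketch of provisioning a reader account and capping its spend (names and quota are illustrative; the resource monitor is created inside the reader account and assigned to its warehouses):

```sql
-- Provider creates and fully manages the reader account
CREATE MANAGED ACCOUNT partner_reader
  ADMIN_NAME = 'reader_admin',
  ADMIN_PASSWORD = 'Str0ng-Temp-P@ss1',  -- rotate after first login
  TYPE = READER;

-- Inside the reader account: cap the compute the provider pays for
CREATE RESOURCE MONITOR reader_monitor
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 90 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;
```

`CREATE MANAGED ACCOUNT ... TYPE = READER` returns the new account's locator and URL, which you then use in `ALTER SHARE ... ADD ACCOUNTS` to grant it your share.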
Q: What is a Snowflake Data Clean Room and how does it differ from regular data sharing?
A Data Clean Room enables two or more parties to run joint analyses on combined data without either party seeing the other's raw records. Built on top of secure data sharing + secure UDFs + row access policies. The flow: Party A shares a secure view with aggregation-only policies. Party B runs approved queries (via stored procedures) that join both datasets and return only aggregate results (e.g., "overlap of your customers with mine is 15%"). Neither party can run SELECT * on the other's data — the secure UDFs enforce that only approved computations return results. Use cases: advertising measurement (match ad impressions with conversions), healthcare research (combine patient cohorts without sharing PII), financial risk assessment across institutions.
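One building block of this pattern can be sketched as a secure SQL UDF that returns only an aggregate and suppresses small groups. All table, column, and function names here are hypothetical; production clean rooms typically combine this with row access policies and an approved-query workflow:

```sql
-- Aggregate-only overlap function: returns a count, never individual rows.
-- Returns NULL (no row) when the overlap is below the suppression threshold.
CREATE SECURE FUNCTION customer_overlap(min_group_size NUMBER)
RETURNS NUMBER
AS
$$
  SELECT COUNT(*)
  FROM my_customers m
  JOIN shared_partner_db.public.partner_customers p
    ON m.email_hash = p.email_hash
  HAVING COUNT(*) >= min_group_size
$$;
```

Sharing only the SECURE function (not the underlying tables) means the counterpart can learn the overlap size but can never run `SELECT *` against the raw records.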
Q: How do you monetize data through Snowflake Marketplace? Walk through the process.
Steps: (1) Prepare data: create tables/views with clean schemas and documentation. (2) Create a listing: in the Marketplace provider UI, define listing title, description, sample queries, and pricing (free, paid per-query, or paid subscription). (3) Set access: choose public (anyone) or request-based (you approve each consumer). (4) For paid listings: set up a Stripe account for payment processing, define pricing tiers, and configure usage tracking. (5) Auto-fulfillment: enable replication to multiple regions/clouds for global availability. Snowflake handles cross-region data movement. (6) Monitor usage via LISTING_ACCESS_HISTORY and LISTING_TELEMETRY. Revenue split: Snowflake takes a percentage of paid listing revenue. Most providers start with free listings for market visibility, then add paid premium tiers.
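Step (6) can be sketched as a usage query against the account usage views Snowflake exposes to providers (the `DATA_SHARING_USAGE` schema in the shared SNOWFLAKE database; exact column names may vary by release):

```sql
-- Daily query volume per listing and consumer account
SELECT
  listing_name,
  consumer_account_locator,
  query_date,
  COUNT(*) AS query_count
FROM snowflake.data_sharing_usage.listing_access_history
GROUP BY listing_name, consumer_account_locator, query_date
ORDER BY query_date DESC;
```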
Q: Can you share data with consumers who don't use Snowflake?
Not directly via Secure Data Sharing — that's Snowflake-to-Snowflake only. For non-Snowflake consumers: (1) Create a reader account (gives them Snowflake access). (2) Use Snowflake's external functions to push data to external APIs. (3) Use COPY INTO to export data to a stage (S3/GCS/Azure) that the consumer can access. (4) Use Iceberg tables for open-format sharing readable by Spark, Trino, etc.
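Option (3) — unloading to a stage — can be sketched as follows (stage, path, and table names are illustrative):

```sql
-- Unload query results to an external stage the consumer can read
COPY INTO @export_stage/daily_orders/
FROM (
  SELECT order_id, customer_id, order_total
  FROM sales_db.public.orders
  WHERE order_date = CURRENT_DATE()
)
FILE_FORMAT = (TYPE = PARQUET);
```

The consumer then pulls the Parquet files directly from the cloud bucket with whatever tooling they already have — no Snowflake account required.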
Q: Is access to shared data auditable? What can each side see?
Yes — the consumer sees their own queries on shared objects in their QUERY_HISTORY. The provider can see that data was accessed via LISTING_TELEMETRY (for marketplace listings) or ACCESS_HISTORY (for direct shares, Enterprise edition). The provider cannot see the consumer's full query text — only that their shared objects were accessed, with row counts and timestamps.