add draft macros, to be modified pending AN-5982

Mike Stepanovic 2025-05-01 10:14:44 -06:00
parent 67122e16e4
commit 7d6778ff4f
21 changed files with 1217 additions and 2 deletions

.github/workflows/dbt_run_adhoc.yml

@@ -0,0 +1,64 @@
name: dbt_run_adhoc
run-name: ${{ inputs.dbt_command }}

on:
  workflow_dispatch:
    branches:
      - "main"
    inputs:
      environment:
        type: choice
        description: DBT Run Environment
        required: true
        options:
          - dev
          - prod
        default: dev
      warehouse:
        type: choice
        description: Snowflake warehouse
        required: true
        options:
          - DBT
          - DBT_CLOUD
        default: DBT
      dbt_command:
        type: string
        description: 'DBT Run Command'
        required: true

env:
  ACCOUNT: "${{ vars.ACCOUNT }}"
  ROLE: "${{ vars.ROLE }}"
  USER: "${{ vars.USER }}"
  PASSWORD: "${{ secrets.PASSWORD }}"
  REGION: "${{ vars.REGION }}"
  DATABASE: "${{ vars.DATABASE }}"
  WAREHOUSE: "${{ inputs.warehouse }}"
  SCHEMA: "${{ vars.SCHEMA }}"

concurrency:
  group: ${{ github.workflow }}

jobs:
  run_dbt_jobs:
    runs-on: ubuntu-latest
    environment:
      name: workflow_${{ inputs.environment }}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
          cache: "pip"
      - name: install dependencies
        run: |
          pip install -r requirements.txt
          dbt deps
      - name: Run DBT Jobs
        run: |
          ${{ inputs.dbt_command }}

README.md

@@ -1,2 +1,98 @@
# fsc-ibc
Home of IBC related macros, models and documentation
# README for IBC dbt macros, models and documentation
[IBC Wiki & Documentation](https://github.com/FlipsideCrypto/fsc-ibc/wiki)
---
## Adding the `fsc_ibc` dbt package
The `fsc_ibc` dbt package is a centralized repository of dbt macros and Snowflake functions that can be reused across other repos.
1. Navigate to `packages.yml` in your respective repo.
2. Add the following (reference the latest version from [here](https://github.com/FlipsideCrypto/fsc-ibc/tags)):
```
  - git: https://github.com/FlipsideCrypto/fsc-ibc.git
    revision: "v1.1.0"
```
3. Run `dbt clean && dbt deps` to install the package.

**Troubleshooting:**
If `package-lock.yml` is present, you may need to remove it and re-run `dbt deps`. This is a known issue when reinstalling a dbt package under the same version or revision tag.
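A minimal reset sequence for that case (a sketch, assuming you are in the project root) might look like:
```
rm -f package-lock.yml    # drop the stale lock so dbt re-resolves the revision
dbt clean && dbt deps     # reinstall packages from packages.yml
```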
---
## Recommended Development Flow
The recommended development flow for making changes to `fsc-ibc` is as follows:
1. Create a new branch in `fsc-ibc` with your changes (e.g. `AN-1234/dummy-branch`). When ready to test in another project, push your branch to the repository.
2. In your project (e.g. `swell-models`), update the revision in `packages.yml` to your branch, e.g. `revision: AN-1234/dummy-branch` (see the sketch after this list).
3. Run `make cleanup_time` to pull in the current remote version of your branch.
   - This will delete `package-lock.yml` and run `dbt clean && dbt deps`.
4. Begin testing changes in the project repository.
5. If more changes are needed on the `fsc-ibc` branch:
   - Make sure to push them up and re-run `make cleanup_time` in the project.
   - Note: If you do not delete `package-lock.yml`, you likely won't pull in your latest changes, even if you run `dbt clean && dbt deps`.
6. Once the `fsc-ibc` PR is ready, proceed to the [Adding Release Versions](#adding-release-versions) section.
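For step 2, the `packages.yml` entry would look something like the following sketch (using the example branch name above):
```
packages:
  - git: https://github.com/FlipsideCrypto/fsc-ibc.git
    revision: AN-1234/dummy-branch
```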
---
## Adding Release Versions
1. First get PR approval/review before proceeding with version tagging.
2. Make the necessary changes to your code in your dbt package repository (e.g., `fsc-ibc`).
3. Commit your changes with `git add .` and `git commit -m "Your commit message"`.
4. Push your commits to the remote repository with `git push ...`.
5. Tag your commit with a version number using `git tag -a v1.1.0 -m "version 1.1.0"`.
6. Push your tags to the remote repository with `git push origin --tags`.
7. Add official `Release` notes to the repo with the new tag.
* Each `Release` should be formatted with the following template:
```
Release Title: <vx.y.z release title>
- Description of changes
- ...
**Full Changelog**: <link to the commits included in this new version> (hint: click the "Generate Release Notes" button on the release page to automatically generate this link)
```
Alternatively, you can use the `makefile` to create a new tag and push it to the remote repository:
```
make new_repo_tag
```
The target prints the most recent tags and prompts for the new tag name:
```
Last 3 tags:
v1.11.0
v1.10.0
v1.9.0
Enter new tag name (e.g., v1.1.0) or 'q' to quit:
```
Respond with the new tag in the same format:
```
vx.y.z # where x, y, and z are the new version numbers (or q to quit)
```
8. In the `packages.yml` file of your other dbt project, specify the new version of the package with:
```
packages:
  - git: "https://github.com/FlipsideCrypto/fsc-ibc.git"
    revision: "v0.0.1"
```
Regarding Semantic Versioning:
1. Semantic versioning is a versioning scheme for software that aims to convey meaning about the underlying changes with each new release.
2. It's typically formatted as MAJOR.MINOR.PATCH (e.g. v1.2.3), where:
- MAJOR version (first number) should increment when there are potential breaking or incompatible changes.
- MINOR version (second number) should increment when functionality or features are added in a backwards-compatible manner.
- PATCH version (third number) should increment when bug fixes are made without adding new features.
3. Semantic versioning helps package users understand the degree of change in a new release and decide when to adopt new versions. With dbt packages, when you tag a release with a semantic version, users can specify the exact version they want to use in their projects.
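As an illustration of those bump rules (versions are hypothetical):
```
v1.4.2 -> v2.0.0   # MAJOR: breaking change, e.g. a macro signature changes
v1.4.2 -> v1.5.0   # MINOR: new backwards-compatible macro added
v1.4.2 -> v1.4.3   # PATCH: bug fix only, no new features
```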
---
### DBT Resources:
- Learn more about dbt [in the docs](https://docs.getdbt.com/docs/introduction)
- Check out [Discourse](https://discourse.getdbt.com/) for commonly asked questions and answers
- Join the [chat](https://community.getdbt.com/) on Slack for live discussions and support
- Find [dbt events](https://events.getdbt.com) near you
- Check out [the blog](https://blog.getdbt.com/) for the latest news on dbt's development and best practices

dbt_project.yml

@@ -0,0 +1,42 @@
# Name your project! Project names should contain only lowercase characters
# and underscores. A good package name should reflect your organization's
# name or the intended use of these models
name: "fsc_ibc"
version: "1.0.0"
config-version: 2

# This setting configures which "profile" dbt uses for this project.
profile: "fsc_ibc"

# These configurations specify where dbt should look for different types of files.
# The `model-paths` config, for example, states that models in this project can be
# found in the "models/" directory. You probably won't need to change these!
model-paths: ["models"]
analysis-paths: ["analysis"]
test-paths: ["tests"]
seed-paths: ["data"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]

target-path: "target" # directory which will store compiled SQL files
clean-targets: # directories to be removed by `dbt clean`
  - "target"
  - "dbt_modules"
  - "dbt_packages"

# Configuring models
# Full documentation: https://docs.getdbt.com/docs/configuring-models
# In this example config, we tell dbt to build all models in the example/ directory
# as tables. These settings can be overridden in the individual model files
# using the `{{ config(...) }}` macro.
models:
  fsc_ibc:
    +enabled: false # Disable all models by default
    main_package: # Only enable models that are required to run in the fsc_ibc environment
      admin:
        +enabled: true

vars:
  "dbt_date:time_zone": GMT


@@ -0,0 +1,55 @@
{% macro create_sps() %}
    {% if var("UPDATE_UDFS_AND_SPS", false) %}
        {% set prod_db_name = var('GLOBAL_PROD_DB_NAME', '') | upper %}
        {% if target.database | upper == prod_db_name and target.name == 'prod' %}
            {% set schema_name = var("SPS_SCHEMA_NAME", '_internal') %}
            {% do run_query("CREATE SCHEMA IF NOT EXISTS " ~ schema_name) %}
            {% set sp_create_prod_clone_sql %}
create or replace procedure {{ schema_name }}.create_prod_clone(source_db_name string, destination_db_name string, role_name string)
returns boolean
language javascript
execute as caller
as
$$
    snowflake.execute({sqlText: `BEGIN TRANSACTION;`});
    try {
        snowflake.execute({sqlText: `CREATE OR REPLACE DATABASE ${DESTINATION_DB_NAME} CLONE ${SOURCE_DB_NAME}`});
        snowflake.execute({sqlText: `DROP SCHEMA IF EXISTS ${DESTINATION_DB_NAME}._INTERNAL`}); /* this only needs to be in prod */
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON ALL SCHEMAS IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME} COPY CURRENT GRANTS;`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON ALL FUNCTIONS IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME} COPY CURRENT GRANTS;`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON ALL PROCEDURES IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME} COPY CURRENT GRANTS;`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON ALL VIEWS IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME} COPY CURRENT GRANTS;`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON ALL STAGES IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME} COPY CURRENT GRANTS;`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON ALL TABLES IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME} COPY CURRENT GRANTS;`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON FUTURE FUNCTIONS IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME};`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON FUTURE PROCEDURES IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME};`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON FUTURE VIEWS IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME};`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON FUTURE STAGES IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME};`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON FUTURE TABLES IN DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME};`});
        snowflake.execute({sqlText: `GRANT OWNERSHIP ON DATABASE ${DESTINATION_DB_NAME} TO ROLE ${ROLE_NAME} COPY CURRENT GRANTS;`});
        var existing_tags = snowflake.execute({sqlText: `SHOW TAGS IN DATABASE ${DESTINATION_DB_NAME};`});
        while (existing_tags.next()) {
            var schema = existing_tags.getColumnValue(4);
            var tag_name = existing_tags.getColumnValue(2);
            snowflake.execute({sqlText: `GRANT OWNERSHIP ON TAG ${DESTINATION_DB_NAME}.${schema}.${tag_name} TO ROLE ${ROLE_NAME} COPY CURRENT GRANTS;`});
        }
        snowflake.execute({sqlText: `COMMIT;`});
    } catch (err) {
        snowflake.execute({sqlText: `ROLLBACK;`});
        throw(err);
    }
    return true;
$$
            {% endset %}
            {% do run_query(sp_create_prod_clone_sql) %}
            {{ log("Created stored procedure: " ~ schema_name ~ ".create_prod_clone", info=True) }}
        {% endif %}
    {% endif %}
{% endmacro %}
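Because the macro is gated on `UPDATE_UDFS_AND_SPS` and a prod target, one way to invoke it ad hoc is via `dbt run-operation`; a sketch (assuming the `prod` target defined in this repo's `profiles.yml`):
```
dbt run-operation create_sps --target prod --vars '{"UPDATE_UDFS_AND_SPS": true}'
```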


@@ -0,0 +1,9 @@
{% macro create_udfs() %}
    {% if var("UPDATE_UDFS_AND_SPS", false) %}
        {% set sql %}
            CREATE SCHEMA IF NOT EXISTS silver;
            {{ create_udf_bulk_rest_api_v2() }}
        {% endset %}
        {% do run_query(sql) %}
    {% endif %}
{% endmacro %}
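Create-on-run macros like this are conventionally wired into `on-run-start` hooks so the objects exist before models build; a sketch of that wiring in `dbt_project.yml` (the hook placement is an assumption, not part of this commit):
```
on-run-start:
  - '{{ create_sps() }}'
  - '{{ create_udfs() }}'
```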


@@ -0,0 +1,14 @@
{% macro run_sp_create_prod_clone() %}
    {% set prod_db_name = var('GLOBAL_PROD_DB_NAME', '') | upper %}
    {% set dev_suffix = var('DEV_DATABASE_SUFFIX', '_DEV') %}
    {% set clone_role = var('CLONE_ROLE', 'internal_dev') %}
    {% set clone_query %}
        call {{ prod_db_name }}._internal.create_prod_clone(
            '{{ prod_db_name }}',
            '{{ prod_db_name }}{{ dev_suffix }}',
            '{{ clone_role }}'
        );
    {% endset %}
    {% do run_query(clone_query) %}
{% endmacro %}
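A sketch of kicking off the clone refresh from the CLI (all variable values are placeholders):
```
dbt run-operation run_sp_create_prod_clone \
  --vars '{"GLOBAL_PROD_DB_NAME": "MY_PROD_DB", "DEV_DATABASE_SUFFIX": "_DEV", "CLONE_ROLE": "internal_dev"}'
```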


@@ -0,0 +1,6 @@
{% macro create_ibc_streamline_udfs() %}
    {% if var("UPDATE_UDFS_AND_SPS", false) %}
        {% do run_query("CREATE SCHEMA IF NOT EXISTS streamline") %}
        {{ create_udf_bulk_rest_api_v2() }}
    {% endif %}
{% endmacro %}


@@ -0,0 +1,77 @@
{% macro streamline_external_table_query_v2(
        model,
        partition_function,
        partition_name,
        other_cols
    ) %}
    WITH meta AS (
        SELECT
            LAST_MODIFIED::timestamp_ntz AS _inserted_timestamp,
            file_name,
            {{ partition_function }} AS {{ partition_name }}
        FROM
            TABLE(
                information_schema.external_table_file_registration_history(
                    start_time => DATEADD('day', -3, CURRENT_TIMESTAMP()),
                    table_name => '{{ source( "bronze_streamline", model) }}'
                )
            ) A
    )
    SELECT
        {{ other_cols }},
        _inserted_timestamp,
        s.{{ partition_name }},
        s.value AS VALUE,
        file_name
    FROM
        {{ source(
            "bronze_streamline",
            model
        ) }} s
        JOIN meta b
        ON b.file_name = metadata$filename
        AND b.{{ partition_name }} = s.{{ partition_name }}
    WHERE
        b.{{ partition_name }} = s.{{ partition_name }}
{% endmacro %}

{% macro streamline_external_table_FR_query_v2(
        model,
        partition_function,
        partition_name,
        other_cols
    ) %}
    WITH meta AS (
        SELECT
            LAST_MODIFIED::timestamp_ntz AS _inserted_timestamp,
            file_name,
            {{ partition_function }} AS {{ partition_name }}
        FROM
            TABLE(
                information_schema.external_table_files(
                    table_name => '{{ source( "bronze_streamline", model) }}'
                )
            ) A
    )
    SELECT
        {{ other_cols }},
        _inserted_timestamp,
        s.{{ partition_name }},
        s.value AS VALUE,
        file_name
    FROM
        {{ source(
            "bronze_streamline",
            model
        ) }} s
        JOIN meta b
        ON b.file_name = metadata$filename
        AND b.{{ partition_name }} = s.{{ partition_name }}
    WHERE
        b.{{ partition_name }} = s.{{ partition_name }}
{% endmacro %}
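A bronze model built on the first macro might look like the following sketch (the model name, partition function, and extra columns are hypothetical):
```
{{ config(materialized = 'view') }}

{{ fsc_ibc.streamline_external_table_query_v2(
    model = 'blocks',
    partition_function = "CAST(SPLIT_PART(SPLIT_PART(file_name, '/', 3), '_', 1) AS INTEGER)",
    partition_name = 'partition_key',
    other_cols = "s.value :BLOCK_NUMBER :: INT AS block_number"
) }}
```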


@@ -0,0 +1,20 @@
{% macro create_udf_bulk_rest_api_v2() %}
    {{ log("Creating udf udf_bulk_rest_api for target:" ~ target.name ~ ", schema: " ~ target.schema ~ ", DB: " ~ target.database, info=True) }}
    {{ log("role:" ~ target.role ~ ", user:" ~ target.user, info=True) }}
    {% set sql %}
        CREATE OR REPLACE EXTERNAL FUNCTION streamline.udf_bulk_rest_api_v2(json OBJECT)
        RETURNS ARRAY
        api_integration =
        {% if target.name == "prod" %}
            {{ log("Creating prod udf_bulk_rest_api_v2", info=True) }}
            {{ var("API_INTEGRATION_PROD") }} AS 'https://{{ var("EXTERNAL_FUNCTION_URI") | lower }}udf_bulk_rest_api'
        {% elif target.name == "dev" %}
            {{ log("Creating dev udf_bulk_rest_api_v2", info=True) }}
            {{ var("API_INTEGRATION_DEV") }} AS 'https://{{ var("EXTERNAL_FUNCTION_URI") | lower }}udf_bulk_rest_api'
        {% else %}
            {{ log("Creating default (dev) udf_bulk_rest_api_v2", info=True) }}
            {{ var("config")["dev"]["API_INTEGRATION"] }} AS 'https://{{ var("config")["dev"]["EXTERNAL_FUNCTION_URI"] | lower }}udf_bulk_rest_api'
        {% endif %};
    {% endset %}
    {{ log(sql, info=True) }}
    {% do adapter.execute(sql) %}
{% endmacro %}
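The integration name and URI are expected to come from project vars; a sketch of the corresponding `dbt_project.yml` entries (all values are placeholders):
```
vars:
  API_INTEGRATION_PROD: aws_ibc_api_prod
  API_INTEGRATION_DEV: aws_ibc_api_dev
  EXTERNAL_FUNCTION_URI: abc123.execute-api.us-east-1.amazonaws.com/prod/  # the macro appends udf_bulk_rest_api
```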


@@ -0,0 +1,22 @@
{% macro standard_predicate(
        input_column = 'block_number'
    ) -%}
    {%- set database_name = target.database -%}
    {%- set schema_name = generate_schema_name(
        node = model
    ) -%}
    {%- set table_name = generate_alias_name(
        node = model
    ) -%}
    {%- set tmp_table_name = table_name ~ '__dbt_tmp' -%}
    {%- set full_table_name = database_name ~ '.' ~ schema_name ~ '.' ~ table_name -%}
    {%- set full_tmp_table_name = database_name ~ '.' ~ schema_name ~ '.' ~ tmp_table_name -%}
    {{ full_table_name }}.{{ input_column }} >= (
        SELECT
            MIN(
                {{ input_column }}
            )
        FROM
            {{ full_tmp_table_name }}
    )
{%- endmacro %}
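The macro renders a predicate against the incremental temp table, so it is meant to be passed to `incremental_predicates`; a usage sketch in a model config (the unique key is illustrative):
```
{{ config(
    materialized = 'incremental',
    unique_key = 'block_number',
    incremental_predicates = [fsc_ibc.standard_predicate()]
) }}
```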


@@ -0,0 +1,5 @@
{% macro add_database_or_schema_tags() %}
    {% set prod_db_name = var('GLOBAL_PROD_DB_NAME', '') | upper %}
    {{ set_database_tag_value('BLOCKCHAIN_NAME', prod_db_name) }}
    {{ set_database_tag_value('BLOCKCHAIN_TYPE', 'IBC') }}
{% endmacro %}


@@ -0,0 +1,127 @@
{% macro apply_meta_as_tags(results) %}
    {% if var("UPDATE_SNOWFLAKE_TAGS", false) %}
        {{ log('apply_meta_as_tags', info=False) }}
        {{ log(results, info=False) }}
        {% if execute %}
            {%- set tags_by_schema = {} -%}
            {% for res in results -%}
                {% if res.node.meta.database_tags %}
                    {%- set model_database = res.node.database -%}
                    {%- set model_schema = res.node.schema -%}
                    {%- set model_schema_full = model_database + '.' + model_schema -%}
                    {%- set model_alias = res.node.alias -%}
                    {% if model_schema_full not in tags_by_schema.keys() %}
                        {{ log('need to fetch tags for schema ' + model_schema_full, info=False) }}
                        {%- call statement('main', fetch_result=True) -%}
                            show tags in {{ model_database }}.{{ model_schema }}
                        {%- endcall -%}
                        {%- set _ = tags_by_schema.update({model_schema_full: load_result('main')['table'].columns.get('name').values() | list}) -%}
                        {{ log('Added tags to cache', info=False) }}
                    {% else %}
                        {{ log('already have tag info for schema', info=False) }}
                    {% endif %}
                    {%- set current_tags_in_schema = tags_by_schema[model_schema_full] -%}
                    {{ log('current_tags_in_schema:', info=False) }}
                    {{ log(current_tags_in_schema, info=False) }}
                    {{ log("========== Processing tags for " + model_schema_full + "." + model_alias + " ==========", info=False) }}
                    {% set line -%}
                        node: {{ res.node.unique_id }}; status: {{ res.status }} (message: {{ res.message }})
                        node full: {{ res.node }}
                        meta: {{ res.node.meta }}
                        materialized: {{ res.node.config.materialized }}
                    {%- endset %}
                    {{ log(line, info=False) }}
                    {%- call statement('main', fetch_result=True) -%}
                        select LEVEL, UPPER(TAG_NAME) as TAG_NAME, TAG_VALUE
                        from table(information_schema.tag_references_all_columns('{{ model_schema }}.{{ model_alias }}', 'table'))
                    {%- endcall -%}
                    {%- set existing_tags_for_table = load_result('main')['data'] -%}
                    {{ log('Existing tags for table:', info=False) }}
                    {{ log(existing_tags_for_table, info=False) }}
                    {{ log('--', info=False) }}
                    {% for table_tag in res.node.meta.database_tags.table %}
                        {{ fsc_ibc.create_tag_if_missing(current_tags_in_schema, table_tag | upper) }}
                        {% set desired_tag_value = res.node.meta.database_tags.table[table_tag] %}
                        {{ fsc_ibc.set_table_tag_value_if_different(model_schema, model_alias, table_tag, desired_tag_value, existing_tags_for_table) }}
                    {% endfor %}
                    {{ log("========== Finished processing tags for " + model_alias + " ==========", info=False) }}
                {% endif %}
            {% endfor %}
        {% endif %}
    {% endif %}
{% endmacro %}

{% macro create_tag_if_missing(all_tag_names, table_tag) %}
    {% if table_tag not in all_tag_names %}
        {{ log('Creating missing tag ' + table_tag, info=False) }}
        {%- call statement('main', fetch_result=True) -%}
            create tag if not exists silver.{{ table_tag }}
        {%- endcall -%}
        {{ log(load_result('main').data, info=False) }}
    {% else %}
        {{ log('Tag already exists: ' + table_tag, info=False) }}
    {% endif %}
{% endmacro %}

{% macro set_table_tag_value_if_different(model_schema, table_name, tag_name, desired_tag_value, existing_tags) %}
    {{ log('Ensuring tag ' + tag_name + ' has value ' + desired_tag_value + ' at table level', info=False) }}
    {%- set existing_tag_for_table = existing_tags | selectattr('0', 'equalto', 'TABLE') | selectattr('1', 'equalto', tag_name | upper) | list -%}
    {{ log('Filtered tags for table:', info=False) }}
    {{ log(existing_tag_for_table[0], info=False) }}
    {% if existing_tag_for_table | length > 0 and existing_tag_for_table[0][2] == desired_tag_value %}
        {{ log('Correct tag value already exists', info=False) }}
    {% else %}
        {{ log('Setting tag value for ' + tag_name + ' to value ' + desired_tag_value, info=False) }}
        {%- call statement('main', fetch_result=True) -%}
            alter table {{ model_schema }}.{{ table_name }} set tag {{ tag_name }} = '{{ desired_tag_value }}'
        {%- endcall -%}
        {{ log(load_result('main').data, info=False) }}
    {% endif %}
{% endmacro %}

{% macro set_column_tag_value_if_different(table_name, column_name, tag_name, desired_tag_value, existing_tags) %}
    {{ log('Ensuring tag ' + tag_name + ' has value ' + desired_tag_value + ' at column level', info=False) }}
    {%- set existing_tag_for_column = existing_tags | selectattr('0', 'equalto', 'COLUMN') | selectattr('1', 'equalto', tag_name | upper) | list -%}
    {{ log('Filtered tags for column:', info=False) }}
    {{ log(existing_tag_for_column[0], info=False) }}
    {% if existing_tag_for_column | length > 0 and existing_tag_for_column[0][2] == desired_tag_value %}
        {{ log('Correct tag value already exists', info=False) }}
    {% else %}
        {{ log('Setting tag value for ' + tag_name + ' to value ' + desired_tag_value, info=False) }}
        {%- call statement('main', fetch_result=True) -%}
            alter table {{ table_name }} modify column {{ column_name }} set tag {{ tag_name }} = '{{ desired_tag_value }}'
        {%- endcall -%}
        {{ log(load_result('main').data, info=False) }}
    {% endif %}
{% endmacro %}

{% macro set_database_tag_value(tag_name, tag_value) %}
    {% set query %}
        create tag if not exists silver.{{ tag_name }}
    {% endset %}
    {% do run_query(query) %}
    {% set query %}
        alter database {{ target.database }} set tag {{ target.database }}.silver.{{ tag_name }} = '{{ tag_value }}'
    {% endset %}
    {% do run_query(query) %}
{% endmacro %}

{% macro set_schema_tag_value(target_schema, tag_name, tag_value) %}
    {% set query %}
        create tag if not exists silver.{{ tag_name }}
    {% endset %}
    {% do run_query(query) %}
    {% set query %}
        alter schema {{ target.database }}.{{ target_schema }} set tag {{ target.database }}.silver.{{ tag_name }} = '{{ tag_value }}'
    {% endset %}
    {% do run_query(query) %}
{% endmacro %}
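Tagging is driven by each model's `meta.database_tags` plus a run-results hook; a sketch of both pieces (tag name and value are illustrative):
```
# In a model: {{ config(meta = {'database_tags': {'table': {'PURPOSE': 'core'}}}) }}

# In dbt_project.yml:
on-run-end:
  - '{{ fsc_ibc.apply_meta_as_tags(results) }}'
```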


@@ -0,0 +1,17 @@
{% macro release_chain(schema_name, role_name) %}
    {% set prod_db_name = (target.database | replace('_dev', '') | upper) %}
    {% if target.database | upper == prod_db_name and target.name == 'prod' %}
        {% do run_query("GRANT USAGE ON DATABASE " ~ prod_db_name ~ " TO ROLE " ~ role_name ~ ";") %}
        {% do run_query("GRANT USAGE ON SCHEMA " ~ prod_db_name ~ "." ~ schema_name ~ " TO ROLE " ~ role_name ~ ";") %}
        {% do run_query("GRANT SELECT ON ALL TABLES IN SCHEMA " ~ prod_db_name ~ "." ~ schema_name ~ " TO ROLE " ~ role_name ~ ";") %}
        {% do run_query("GRANT SELECT ON ALL VIEWS IN SCHEMA " ~ prod_db_name ~ "." ~ schema_name ~ " TO ROLE " ~ role_name ~ ";") %}
        {% do run_query("GRANT SELECT ON FUTURE TABLES IN SCHEMA " ~ prod_db_name ~ "." ~ schema_name ~ " TO ROLE " ~ role_name ~ ";") %}
        {% do run_query("GRANT SELECT ON FUTURE VIEWS IN SCHEMA " ~ prod_db_name ~ "." ~ schema_name ~ " TO ROLE " ~ role_name ~ ";") %}
        {{ log("Permissions granted to role " ~ role_name ~ " for schema " ~ schema_name, info=True) }}
    {% else %}
        {{ log("Not granting SELECT on future tables and views in schema " ~ schema_name ~ " to role " ~ role_name ~ " because target is not prod", info=True) }}
    {% endif %}
{% endmacro %}
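A sketch of granting one schema to a role via `run-operation` (schema and role names are placeholders):
```
dbt run-operation release_chain --args '{schema_name: core, role_name: IBC_READER}' --target prod
```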


@@ -0,0 +1,233 @@
{% macro vars_config(all_projects=false) %}
    {# Retrieves variable configurations and values from project-specific macros.
       When all_projects=True, gets variables from all projects.
       Otherwise, gets variables only for the current project based on target database. #}
    {# Initialize empty dictionary for all variables #}
    {% set target_vars = {} %}
    {# Determine current project based on database name #}
    {% set target_db = target.database.lower() | replace('_dev', '') %}
    {% if all_projects %}
        {# Get all macro names in the context #}
        {% set all_macros = context.keys() %}
        {# Filter for project variable macros (those ending with _vars) #}
        {% for macro_name in all_macros %}
            {% if macro_name.endswith('_vars') %}
                {# Extract project name from macro name #}
                {% set project_name = macro_name.replace('_vars', '') %}
                {# Call the project macro and add to target_vars #}
                {% set project_config = context[macro_name]() %}
                {# Only include if the result is a mapping #}
                {% if project_config is mapping %}
                    {% do target_vars.update({project_name: project_config}) %}
                {% endif %}
            {% endif %}
        {% endfor %}
    {% else %}
        {# Construct the macro name for this project #}
        {% set project_macro = target_db ~ '_vars' %}
        {# Try to call the macro directly #}
        {% if context.get(project_macro) is not none %}
            {% set project_config = context[project_macro]() %}
            {% do target_vars.update({target_db: project_config}) %}
        {% endif %}
    {% endif %}
    {{ return(target_vars) }}
{% endmacro %}

{% macro flatten_vars() %}
    {# Converts the nested variable structure from vars_config() into a flat list of dictionaries.
       Each dictionary contains project, key, parent_key, value properties. #}
    {# Get the nested structure from vars_config() #}
    {% set nested_vars = vars_config() %}
    {# Convert the nested structure to the flat format expected by get_var() #}
    {% set flat_vars = [] %}
    {% for project, vars in nested_vars.items() %}
        {% for key, value in vars.items() %}
            {% if value is mapping %}
                {# Handle nested mappings (where parent_key is not none) #}
                {% for subkey, subvalue in value.items() %}
                    {% do flat_vars.append({
                        'project': project,
                        'key': subkey,
                        'parent_key': key,
                        'value': subvalue
                    }) %}
                {% endfor %}
            {% else %}
                {% do flat_vars.append({
                    'project': project,
                    'key': key,
                    'parent_key': none,
                    'value': value
                }) %}
            {% endif %}
        {% endfor %}
    {% endfor %}
    {{ return(flat_vars) }}
{% endmacro %}

{% macro write_vars(variable_key, default) %}
    {# Logs variable information to the terminal and a table in the database.
       Dependent on the WRITE_VARS_ENABLED and execute flags. #}
    {% if var('WRITE_VARS_ENABLED', false) and execute %}
        {% set package = variable_key.split('_')[0] %}
        {% set category = variable_key.split('_')[1] %}
        {# Determine the data type of the default value #}
        {% if default is not none %}
            {% if default is string %}
                {% set default_type = 'STRING' %}
                {% set default_value = '\'\'' ~ default ~ '\'\'' %}
            {% elif default is number %}
                {% set default_type = 'NUMBER' %}
                {% set default_value = default %}
            {% elif default is boolean %}
                {% set default_type = 'BOOLEAN' %}
                {% set default_value = default %}
            {% elif default is mapping %}
                {% set default_type = 'OBJECT' %}
                {% set default_value = default | tojson %}
            {% elif default is sequence and default is not string %}
                {% set default_type = 'ARRAY' %}
                {% set default_value = default | tojson %}
            {% elif default is iterable and default is not string %}
                {% set default_type = 'ITERABLE' %}
                {% set default_value = 'ITERABLE' %}
            {% else %}
                {% set default_type = 'UNKNOWN' %}
                {% set default_value = 'UNKNOWN' %}
            {% endif %}
        {% else %}
            {% set default_type = none %}
            {% set default_value = none %}
        {% endif %}
        {% set log_query %}
            CREATE TABLE IF NOT EXISTS {{ target.database }}.admin._master_keys (
                package STRING,
                category STRING,
                variable_key STRING,
                default_value STRING,
                default_type STRING,
                _inserted_timestamp TIMESTAMP_NTZ DEFAULT SYSDATE()
            );
            INSERT INTO {{ target.database }}.admin._master_keys (
                package,
                category,
                variable_key,
                default_value,
                default_type
            )
            VALUES (
                '{{ package }}',
                '{{ category }}',
                '{{ variable_key }}',
                '{{ default_value }}',
                '{{ default_type }}'
            );
        {% endset %}
        {% do run_query(log_query) %}
        {# Update terminal logs to include type information #}
        {% do log(package ~ "|" ~ category ~ "|" ~ variable_key ~ "|" ~ default_value ~ "|" ~ default_type, info=True) %}
    {% endif %}
{% endmacro %}

{% macro get_var(variable_key, default=none) %}
    {# Retrieves a variable by key from either dbt's built-in var() function or from project configs.
       Handles type conversion for strings, numbers, booleans, arrays, and JSON objects.
       Returns the default value if the variable is not found. #}
    {# Log variable info if enabled #}
    {% do write_vars(variable_key, default) %}
    {# Check if the variable exists in dbt's built-in var() function. If it does, return the value. #}
    {% if var(variable_key, none) is not none %}
        {{ return(var(variable_key)) }}
    {% endif %}
    {# Get flattened variables from the config file #}
    {% set all_vars = flatten_vars() %}
    {% if execute %}
        {# Filter variables based on the requested key #}
        {% set filtered_vars = [] %}
        {% for var_item in all_vars %}
            {% if var_item.key == variable_key or var_item.parent_key == variable_key %}
                {% do filtered_vars.append(var_item) %}
            {% endif %}
        {% endfor %}
        {# If no results found, return the default value #}
        {% if filtered_vars | length == 0 %}
            {{ return(default) }}
        {% endif %}
        {% set first_var = filtered_vars[0] %}
        {% set parent_key = first_var.parent_key %}
        {% set value = first_var.value %}
        {# Check if this is a simple variable (no parent key) or a mapping (has parent key) #}
        {% if parent_key is none or parent_key == '' %}
            {# Infer data type from value #}
            {% if value is string %}
                {% if value.startswith('[') and value.endswith(']') %}
                    {# For array type, parse and convert values to appropriate types #}
                    {% set array_values = value.strip('[]').split(',') %}
                    {% set converted_array = [] %}
                    {% for val in array_values %}
                        {% set stripped_val = val.strip() %}
                        {% if stripped_val.isdigit() %}
                            {% do converted_array.append(stripped_val | int) %}
                        {% elif stripped_val.replace('.', '', 1).isdigit() %}
                            {% do converted_array.append(stripped_val | float) %}
                        {% elif stripped_val.lower() in ['true', 'false'] %}
                            {% do converted_array.append(stripped_val.lower() == 'true') %}
                        {% else %}
                            {% do converted_array.append(stripped_val) %}
                        {% endif %}
                    {% endfor %}
                    {{ return(converted_array) }}
                {% elif value.startswith('{') and value.endswith('}') and ':' in value %}
                    {# For JSON, VARIANT, OBJECT #}
                    {{ return(fromjson(value)) }}
                {% elif value.isdigit() %}
                    {{ return(value | int) }}
                {% elif value.replace('.', '', 1).isdigit() %}
                    {{ return(value | float) }}
                {% elif value.lower() in ['true', 'false'] %}
                    {{ return(value.lower() == 'true') }}
                {% else %}
                    {{ return(value) }}
                {% endif %}
            {% else %}
                {# If the value is already a non-string type (int, bool, etc.) #}
                {{ return(value) }}
            {% endif %}
        {% else %}
            {# For variables with a parent_key, build a dictionary of all child values #}
            {% set mapping = {} %}
            {% for var_item in filtered_vars %}
                {# key: value pairings based on parent_key #}
                {% do mapping.update({var_item.key: var_item.value}) %}
            {% endfor %}
            {{ return(mapping) }}
        {% endif %}
    {% else %}
        {{ return(default) }}
    {% endif %}
{% endmacro %}
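Inside a model or macro, a lookup then resolves from dbt's `var()` first and falls back to the project config; a brief sketch using keys defined elsewhere in this commit:
```
{% set blocks_per_hour = get_var('MAIN_SL_BLOCKS_PER_HOUR', 1) %}        {# simple key -> typed scalar #}
{% set vertex_tokens = get_var('CURATED_VERTEX_TOKEN_MAPPING', {}) %}    {# parent key -> dict of child values #}
```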


@@ -0,0 +1,15 @@
{% macro terra_vars() %}
    {% set vars = {
        'API_INTEGRATION_PROD': 'aws_ton_api_prod_v2',
        'API_INTEGRATION_DEV': 'aws_ton_api_dev_v2',
        'GLOBAL_PROJECT_NAME': 'terra',
        'GLOBAL_NODE_PROVIDER': 'quicknode',
        'GLOBAL_NODE_VAULT_PATH': 'Vault/prod/terra/quicknode/mainnet',
        'GLOBAL_NODE_URL': '{service}/{Authentication}',
        'GLOBAL_WRAPPED_NATIVE_ASSET_ADDRESS': '0x82af49447d8a07e3bd95bd0d56f35241523fbab1',
        'MAIN_SL_BLOCKS_PER_HOUR': 14200,
        'MAIN_PRICES_NATIVE_SYMBOLS': 'ETH'
    } %}
    {{ return(vars) }}
{% endmacro %}


@@ -0,0 +1,213 @@
{% macro return_vars() %}
{# This macro sets and returns all configurable variables used throughout the project,
organizing them by category (Global, Bronze, Silver, Streamline, Decoder etc.) with default values.
IMPORTANT: Only call get_var() once per variable #}
{# Set all variables on the namespace #}
{% set ns = namespace() %}
{# Set Variables and Default Values, organized by category #}
{# Global Variables #}
{% set ns.GLOBAL_PROJECT_NAME = get_var('GLOBAL_PROJECT_NAME', '') %}
{% set ns.GLOBAL_NODE_PROVIDER = get_var('GLOBAL_NODE_PROVIDER', '') %}
{% set ns.GLOBAL_NODE_URL = get_var('GLOBAL_NODE_URL', '{Service}/{Authentication}') %}
{% set ns.GLOBAL_WRAPPED_NATIVE_ASSET_ADDRESS = get_var('GLOBAL_WRAPPED_NATIVE_ASSET_ADDRESS', '') %}
{% set ns.GLOBAL_MAX_SEQUENCE_NUMBER = get_var('GLOBAL_MAX_SEQUENCE_NUMBER', 1000000000) %}
{% set ns.GLOBAL_NODE_VAULT_PATH = get_var('GLOBAL_NODE_VAULT_PATH', '') %}
{% set ns.GLOBAL_NETWORK = get_var('GLOBAL_NETWORK', 'mainnet') %}
{% set ns.GLOBAL_BRONZE_FR_ENABLED = none if get_var('GLOBAL_BRONZE_FR_ENABLED', false) else false %} {# Sets to none if true, still requires --full-refresh, otherwise will use incremental #}
{% set ns.GLOBAL_SILVER_FR_ENABLED = none if get_var('GLOBAL_SILVER_FR_ENABLED', false) else false %}
{% set ns.GLOBAL_GOLD_FR_ENABLED = none if get_var('GLOBAL_GOLD_FR_ENABLED', false) else false %}
{% set ns.GLOBAL_STREAMLINE_FR_ENABLED = none if get_var('GLOBAL_STREAMLINE_FR_ENABLED', false) else false %}
{% set ns.GLOBAL_NEW_BUILD_ENABLED = get_var('GLOBAL_NEW_BUILD_ENABLED', false) %}
{# GHA Workflow Variables #}
{% set ns.MAIN_GHA_STREAMLINE_CHAINHEAD_CRON = get_var('MAIN_GHA_STREAMLINE_CHAINHEAD_CRON', '0,30 * * * *') %}
{% set ns.MAIN_GHA_SCHEDULED_MAIN_CRON = get_var('MAIN_GHA_SCHEDULED_MAIN_CRON', none) %}
{% set ns.MAIN_GHA_SCHEDULED_CURATED_CRON = get_var('MAIN_GHA_SCHEDULED_CURATED_CRON', none) %}
{% set ns.MAIN_GHA_SCHEDULED_ABIS_CRON = get_var('MAIN_GHA_SCHEDULED_ABIS_CRON', none) %}
{% set ns.MAIN_GHA_SCHEDULED_SCORES_CRON = get_var('MAIN_GHA_SCHEDULED_SCORES_CRON', none) %}
{% set ns.MAIN_GHA_TEST_DAILY_CRON = get_var('MAIN_GHA_TEST_DAILY_CRON', none) %}
{% set ns.MAIN_GHA_TEST_INTRADAY_CRON = get_var('MAIN_GHA_TEST_INTRADAY_CRON', none) %}
{% set ns.MAIN_GHA_TEST_MONTHLY_CRON = get_var('MAIN_GHA_TEST_MONTHLY_CRON', none) %}
{% set ns.MAIN_GHA_HEAL_MODELS_CRON = get_var('MAIN_GHA_HEAL_MODELS_CRON', none) %}
{% set ns.MAIN_GHA_FULL_OBSERVABILITY_CRON = get_var('MAIN_GHA_FULL_OBSERVABILITY_CRON', none) %}
{% set ns.MAIN_GHA_DEV_REFRESH_CRON = get_var('MAIN_GHA_DEV_REFRESH_CRON', none) %}
{% set ns.MAIN_GHA_STREAMLINE_DECODER_HISTORY_CRON = get_var('MAIN_GHA_STREAMLINE_DECODER_HISTORY_CRON', none) %}
{# Core Variables #}
{% set ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED = get_var('MAIN_CORE_RECEIPTS_BY_HASH_ENABLED', false) %}
{% set ns.MAIN_CORE_TRACES_ARB_MODE = ns.GLOBAL_PROJECT_NAME.upper() == 'ARBITRUM' %}
{% set ns.MAIN_CORE_TRACES_SEI_MODE = ns.GLOBAL_PROJECT_NAME.upper() == 'SEI' %}
{% set ns.MAIN_CORE_TRACES_KAIA_MODE = ns.GLOBAL_PROJECT_NAME.upper() == 'KAIA' %}
{# Core Silver Variables #}
{% set ns.MAIN_CORE_SILVER_RECEIPTS_UNIQUE_KEY = 'tx_hash' if ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED else 'block_number' %}
{% set ns.MAIN_CORE_SILVER_RECEIPTS_SOURCE_NAME = 'RECEIPTS_BY_HASH' if ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED else 'RECEIPTS' %}
{% set ns.MAIN_CORE_SILVER_RECEIPTS_POST_HOOK = "ALTER TABLE {{ this }} ADD SEARCH OPTIMIZATION on equality(tx_hash, block_number)" if ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED else "ALTER TABLE {{ this }} ADD SEARCH OPTIMIZATION on equality(array_index, block_number)" %}
{% set ns.MAIN_CORE_SILVER_CONFIRM_BLOCKS_FULL_RELOAD_ENABLED = get_var('MAIN_CORE_SILVER_CONFIRM_BLOCKS_FULL_RELOAD_ENABLED', false) %}
{% set ns.MAIN_CORE_SILVER_TRACES_FULL_RELOAD_ENABLED = get_var('MAIN_CORE_SILVER_TRACES_FULL_RELOAD_ENABLED', false) %}
{% set ns.MAIN_CORE_SILVER_TRACES_FR_MAX_BLOCK = get_var('MAIN_CORE_SILVER_TRACES_FR_MAX_BLOCK', 1000000) %}
{% set ns.MAIN_CORE_SILVER_TRACES_FULL_RELOAD_BLOCKS_PER_RUN = get_var('MAIN_CORE_SILVER_TRACES_FULL_RELOAD_BLOCKS_PER_RUN', 1000000) %}
{% set ns.MAIN_CORE_SILVER_TRACES_PARTITION_KEY_ENABLED = get_var('MAIN_CORE_SILVER_TRACES_PARTITION_KEY_ENABLED', true) %}
{# Core Gold Variables #}
{% set ns.MAIN_CORE_GOLD_FACT_TRANSACTIONS_UNIQUE_KEY = 'tx_hash' if ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED else 'block_number' %}
{% set ns.MAIN_CORE_GOLD_EZ_NATIVE_TRANSFERS_UNIQUE_KEY = 'tx_hash' if ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED else 'block_number' %}
{% set ns.MAIN_CORE_GOLD_EZ_NATIVE_TRANSFERS_PRICES_START_DATE = get_var('MAIN_CORE_GOLD_EZ_NATIVE_TRANSFERS_PRICES_START_DATE','2024-01-01') %}
{% set ns.MAIN_CORE_GOLD_EZ_TOKEN_TRANSFERS_UNIQUE_KEY = 'tx_hash' if ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED else 'block_number' %}
{% set ns.MAIN_CORE_GOLD_FACT_EVENT_LOGS_UNIQUE_KEY = 'tx_hash' if ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED else 'block_number' %}
{% set ns.MAIN_CORE_GOLD_TRACES_FULL_RELOAD_ENABLED = get_var('MAIN_CORE_GOLD_TRACES_FULL_RELOAD_ENABLED', false) %}
{% set ns.MAIN_CORE_GOLD_TRACES_FR_MAX_BLOCK = get_var('MAIN_CORE_GOLD_TRACES_FR_MAX_BLOCK', 1000000) %}
{% set ns.MAIN_CORE_GOLD_TRACES_FULL_RELOAD_BLOCKS_PER_RUN = get_var('MAIN_CORE_GOLD_TRACES_FULL_RELOAD_BLOCKS_PER_RUN', 1000000) %}
{% set ns.MAIN_CORE_GOLD_TRACES_TX_STATUS_ENABLED = get_var('MAIN_CORE_GOLD_TRACES_TX_STATUS_ENABLED', false) %}
{% set ns.MAIN_CORE_GOLD_TRACES_SCHEMA_NAME = get_var('MAIN_CORE_GOLD_TRACES_SCHEMA_NAME', 'silver') %}
{% if ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED %}
    {% if ns.MAIN_CORE_TRACES_SEI_MODE %}
        {% set ns.MAIN_CORE_GOLD_TRACES_UNIQUE_KEY = "concat(block_number, '-', tx_hash)" %}
    {% else %}
        {% set ns.MAIN_CORE_GOLD_TRACES_UNIQUE_KEY = "concat(block_number, '-', tx_position)" %}
    {% endif %}
{% else %}
    {% set ns.MAIN_CORE_GOLD_TRACES_UNIQUE_KEY = "block_number" %}
{% endif %}
{# Main Streamline Variables #}
{% set ns.MAIN_SL_BLOCKS_PER_HOUR = get_var('MAIN_SL_BLOCKS_PER_HOUR', 1) %}
{% set ns.MAIN_SL_TRANSACTIONS_PER_BLOCK = get_var('MAIN_SL_TRANSACTIONS_PER_BLOCK', 1) %}
{% set ns.MAIN_SL_TESTING_LIMIT = get_var('MAIN_SL_TESTING_LIMIT', none) %}
{% set ns.MAIN_SL_NEW_BUILD_ENABLED = get_var('MAIN_SL_NEW_BUILD_ENABLED', false) %}
{% set ns.MAIN_SL_MIN_BLOCK = get_var('MAIN_SL_MIN_BLOCK', none) %}
{% set ns.MAIN_SL_CHAINHEAD_DELAY_MINUTES = get_var('MAIN_SL_CHAINHEAD_DELAY_MINUTES', 3) %}
{% set ns.MAIN_SL_BLOCK_LOOKBACK_ENABLED = get_var('MAIN_SL_BLOCK_LOOKBACK_ENABLED', true) %}
{# SL Blocks Transactions Variables #}
{% set ns.MAIN_SL_BLOCKS_TRANSACTIONS_REALTIME_SQL_LIMIT = get_var('MAIN_SL_BLOCKS_TRANSACTIONS_REALTIME_SQL_LIMIT', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_BLOCKS_TRANSACTIONS_REALTIME_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_BLOCKS_TRANSACTIONS_REALTIME_PRODUCER_BATCH_SIZE', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_BLOCKS_TRANSACTIONS_REALTIME_WORKER_BATCH_SIZE = get_var('MAIN_SL_BLOCKS_TRANSACTIONS_REALTIME_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_BLOCKS_TRANSACTIONS_REALTIME_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_BLOCKS_TRANSACTIONS_REALTIME_ASYNC_CONCURRENT_REQUESTS', 100) %}
{% set ns.MAIN_SL_BLOCKS_TRANSACTIONS_HISTORY_SQL_LIMIT = get_var('MAIN_SL_BLOCKS_TRANSACTIONS_HISTORY_SQL_LIMIT', 1000 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_BLOCKS_TRANSACTIONS_HISTORY_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_BLOCKS_TRANSACTIONS_HISTORY_PRODUCER_BATCH_SIZE', 10 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_BLOCKS_TRANSACTIONS_HISTORY_WORKER_BATCH_SIZE = get_var('MAIN_SL_BLOCKS_TRANSACTIONS_HISTORY_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_BLOCKS_TRANSACTIONS_HISTORY_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_BLOCKS_TRANSACTIONS_HISTORY_ASYNC_CONCURRENT_REQUESTS', 10) %}
{# SL Receipts Variables #}
{% set ns.MAIN_SL_RECEIPTS_REALTIME_SQL_LIMIT = get_var('MAIN_SL_RECEIPTS_REALTIME_SQL_LIMIT', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_RECEIPTS_REALTIME_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_RECEIPTS_REALTIME_PRODUCER_BATCH_SIZE', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_RECEIPTS_REALTIME_WORKER_BATCH_SIZE = get_var('MAIN_SL_RECEIPTS_REALTIME_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_RECEIPTS_REALTIME_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_RECEIPTS_REALTIME_ASYNC_CONCURRENT_REQUESTS', 100) %}
{% set ns.MAIN_SL_RECEIPTS_HISTORY_SQL_LIMIT = get_var('MAIN_SL_RECEIPTS_HISTORY_SQL_LIMIT', 1000 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_RECEIPTS_HISTORY_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_RECEIPTS_HISTORY_PRODUCER_BATCH_SIZE', 10 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_RECEIPTS_HISTORY_WORKER_BATCH_SIZE = get_var('MAIN_SL_RECEIPTS_HISTORY_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_RECEIPTS_HISTORY_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_RECEIPTS_HISTORY_ASYNC_CONCURRENT_REQUESTS', 10) %}
{# SL Receipts By Hash Variables #}
{% set ns.MAIN_SL_RECEIPTS_BY_HASH_REALTIME_SQL_LIMIT = get_var('MAIN_SL_RECEIPTS_BY_HASH_REALTIME_SQL_LIMIT', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR * ns.MAIN_SL_TRANSACTIONS_PER_BLOCK) %}
{% set ns.MAIN_SL_RECEIPTS_BY_HASH_REALTIME_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_RECEIPTS_BY_HASH_REALTIME_PRODUCER_BATCH_SIZE', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR * ns.MAIN_SL_TRANSACTIONS_PER_BLOCK) %}
{% set ns.MAIN_SL_RECEIPTS_BY_HASH_REALTIME_WORKER_BATCH_SIZE = get_var('MAIN_SL_RECEIPTS_BY_HASH_REALTIME_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR * ns.MAIN_SL_TRANSACTIONS_PER_BLOCK) %}
{% set ns.MAIN_SL_RECEIPTS_BY_HASH_REALTIME_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_RECEIPTS_BY_HASH_REALTIME_ASYNC_CONCURRENT_REQUESTS', 100) %}
{% set ns.MAIN_SL_RECEIPTS_BY_HASH_REALTIME_TXNS_MODEL_ENABLED = get_var('MAIN_SL_RECEIPTS_BY_HASH_REALTIME_TXNS_MODEL_ENABLED', true) %}
{% set ns.MAIN_SL_RECEIPTS_BY_HASH_HISTORY_SQL_LIMIT = get_var('MAIN_SL_RECEIPTS_BY_HASH_HISTORY_SQL_LIMIT', 1000 * ns.MAIN_SL_BLOCKS_PER_HOUR * ns.MAIN_SL_TRANSACTIONS_PER_BLOCK) %}
{% set ns.MAIN_SL_RECEIPTS_BY_HASH_HISTORY_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_RECEIPTS_BY_HASH_HISTORY_PRODUCER_BATCH_SIZE', 10 * ns.MAIN_SL_BLOCKS_PER_HOUR * ns.MAIN_SL_TRANSACTIONS_PER_BLOCK) %}
{% set ns.MAIN_SL_RECEIPTS_BY_HASH_HISTORY_WORKER_BATCH_SIZE = get_var('MAIN_SL_RECEIPTS_BY_HASH_HISTORY_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR * ns.MAIN_SL_TRANSACTIONS_PER_BLOCK) %}
{% set ns.MAIN_SL_RECEIPTS_BY_HASH_HISTORY_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_RECEIPTS_BY_HASH_HISTORY_ASYNC_CONCURRENT_REQUESTS', 10) %}
{# SL Traces Variables #}
{% set ns.MAIN_SL_TRACES_REALTIME_SQL_LIMIT = get_var('MAIN_SL_TRACES_REALTIME_SQL_LIMIT', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_TRACES_REALTIME_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_TRACES_REALTIME_PRODUCER_BATCH_SIZE', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_TRACES_REALTIME_WORKER_BATCH_SIZE = get_var('MAIN_SL_TRACES_REALTIME_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_TRACES_REALTIME_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_TRACES_REALTIME_ASYNC_CONCURRENT_REQUESTS', 100) %}
{% set ns.MAIN_SL_TRACES_REALTIME_REQUEST_START_BLOCK = get_var('MAIN_SL_TRACES_REALTIME_REQUEST_START_BLOCK', none) %}
{% set ns.MAIN_SL_TRACES_HISTORY_SQL_LIMIT = get_var('MAIN_SL_TRACES_HISTORY_SQL_LIMIT', 1000 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_TRACES_HISTORY_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_TRACES_HISTORY_PRODUCER_BATCH_SIZE', 10 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_TRACES_HISTORY_WORKER_BATCH_SIZE = get_var('MAIN_SL_TRACES_HISTORY_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_TRACES_HISTORY_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_TRACES_HISTORY_ASYNC_CONCURRENT_REQUESTS', 10) %}
{% set ns.MAIN_SL_TRACES_HISTORY_REQUEST_START_BLOCK = get_var('MAIN_SL_TRACES_HISTORY_REQUEST_START_BLOCK', none) %}
{# SL Confirm Blocks Variables #}
{% set ns.MAIN_SL_CONFIRM_BLOCKS_REALTIME_SQL_LIMIT = get_var('MAIN_SL_CONFIRM_BLOCKS_REALTIME_SQL_LIMIT', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_CONFIRM_BLOCKS_REALTIME_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_CONFIRM_BLOCKS_REALTIME_PRODUCER_BATCH_SIZE', 2 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_CONFIRM_BLOCKS_REALTIME_WORKER_BATCH_SIZE = get_var('MAIN_SL_CONFIRM_BLOCKS_REALTIME_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_CONFIRM_BLOCKS_REALTIME_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_CONFIRM_BLOCKS_REALTIME_ASYNC_CONCURRENT_REQUESTS', 100) %}
{% set ns.MAIN_SL_CONFIRM_BLOCKS_HISTORY_SQL_LIMIT = get_var('MAIN_SL_CONFIRM_BLOCKS_HISTORY_SQL_LIMIT', 1000 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_CONFIRM_BLOCKS_HISTORY_PRODUCER_BATCH_SIZE = get_var('MAIN_SL_CONFIRM_BLOCKS_HISTORY_PRODUCER_BATCH_SIZE', 10 * ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_CONFIRM_BLOCKS_HISTORY_WORKER_BATCH_SIZE = get_var('MAIN_SL_CONFIRM_BLOCKS_HISTORY_WORKER_BATCH_SIZE', ns.MAIN_SL_BLOCKS_PER_HOUR) %}
{% set ns.MAIN_SL_CONFIRM_BLOCKS_HISTORY_ASYNC_CONCURRENT_REQUESTS = get_var('MAIN_SL_CONFIRM_BLOCKS_HISTORY_ASYNC_CONCURRENT_REQUESTS', 10) %}
{# Decoder SL Variables #}
{% set ns.DECODER_SL_TESTING_LIMIT = get_var('DECODER_SL_TESTING_LIMIT', none) %}
{% set ns.DECODER_SL_NEW_BUILD_ENABLED = get_var('DECODER_SL_NEW_BUILD_ENABLED', false) %}
{# SL Decoded Logs Variables #}
{% set ns.DECODER_SL_DECODED_LOGS_REALTIME_EXTERNAL_TABLE = get_var('DECODER_SL_DECODED_LOGS_REALTIME_EXTERNAL_TABLE', 'decoded_logs') %}
{% set ns.DECODER_SL_DECODED_LOGS_REALTIME_SQL_LIMIT = get_var('DECODER_SL_DECODED_LOGS_REALTIME_SQL_LIMIT', 10000000) %}
{% set ns.DECODER_SL_DECODED_LOGS_REALTIME_PRODUCER_BATCH_SIZE = get_var('DECODER_SL_DECODED_LOGS_REALTIME_PRODUCER_BATCH_SIZE', 5000000) %}
{% set ns.DECODER_SL_DECODED_LOGS_REALTIME_WORKER_BATCH_SIZE = get_var('DECODER_SL_DECODED_LOGS_REALTIME_WORKER_BATCH_SIZE',500000) %}
{% set ns.DECODER_SL_DECODED_LOGS_HISTORY_EXTERNAL_TABLE = get_var('DECODER_SL_DECODED_LOGS_HISTORY_EXTERNAL_TABLE', 'decoded_logs_history') %}
{% set ns.DECODER_SL_DECODED_LOGS_HISTORY_SQL_LIMIT = get_var('DECODER_SL_DECODED_LOGS_HISTORY_SQL_LIMIT', 8000000) %}
{% set ns.DECODER_SL_DECODED_LOGS_HISTORY_PRODUCER_BATCH_SIZE = get_var('DECODER_SL_DECODED_LOGS_HISTORY_PRODUCER_BATCH_SIZE', 400000) %}
{% set ns.DECODER_SL_DECODED_LOGS_HISTORY_WORKER_BATCH_SIZE = get_var('DECODER_SL_DECODED_LOGS_HISTORY_WORKER_BATCH_SIZE', 100000) %}
{% set ns.DECODER_SL_DECODED_LOGS_HISTORY_WAIT_SECONDS = get_var('DECODER_SL_DECODED_LOGS_HISTORY_WAIT_SECONDS', 60) %}
{# SL Contract ABIs Variables #}
{% set ns.DECODER_SL_CONTRACT_ABIS_REALTIME_SQL_LIMIT = get_var('DECODER_SL_CONTRACT_ABIS_REALTIME_SQL_LIMIT', 100) %}
{% set ns.DECODER_SL_CONTRACT_ABIS_REALTIME_PRODUCER_BATCH_SIZE = get_var('DECODER_SL_CONTRACT_ABIS_REALTIME_PRODUCER_BATCH_SIZE', 1) %}
{% set ns.DECODER_SL_CONTRACT_ABIS_REALTIME_WORKER_BATCH_SIZE = get_var('DECODER_SL_CONTRACT_ABIS_REALTIME_WORKER_BATCH_SIZE', 1) %}
{% set ns.DECODER_SL_CONTRACT_ABIS_INTERACTION_COUNT = get_var('DECODER_SL_CONTRACT_ABIS_INTERACTION_COUNT', 50) %}
{% set ns.DECODER_SL_CONTRACT_ABIS_EXPLORER_URL = get_var('DECODER_SL_CONTRACT_ABIS_EXPLORER_URL', '') %}
{% set ns.DECODER_SL_CONTRACT_ABIS_EXPLORER_URL_SUFFIX = get_var('DECODER_SL_CONTRACT_ABIS_EXPLORER_URL_SUFFIX', '') %}
{% set ns.DECODER_SL_CONTRACT_ABIS_EXPLORER_VAULT_PATH = get_var('DECODER_SL_CONTRACT_ABIS_EXPLORER_VAULT_PATH', '') %}
{% set ns.DECODER_SL_CONTRACT_ABIS_BRONZE_TABLE_ENABLED = get_var('DECODER_SL_CONTRACT_ABIS_BRONZE_TABLE_ENABLED', false) %}
{# ABIs Silver Variables #}
{% set ns.DECODER_SILVER_CONTRACT_ABIS_EXPLORER_NAME = get_var('DECODER_SILVER_CONTRACT_ABIS_EXPLORER_NAME', '') %}
{% set ns.DECODER_SILVER_CONTRACT_ABIS_ETHERSCAN_ENABLED = get_var('DECODER_SILVER_CONTRACT_ABIS_ETHERSCAN_ENABLED', false) %}
{% set ns.DECODER_SILVER_CONTRACT_ABIS_RESULT_ENABLED = get_var('DECODER_SILVER_CONTRACT_ABIS_RESULT_ENABLED', false) %}
{# Observability Variables #}
{% set ns.MAIN_OBSERV_FULL_TEST_ENABLED = get_var('MAIN_OBSERV_FULL_TEST_ENABLED', false) %}
{% set ns.MAIN_OBSERV_BLOCKS_EXCLUSION_LIST_ENABLED = get_var('MAIN_OBSERV_BLOCKS_EXCLUSION_LIST_ENABLED', false) %}
{% set ns.MAIN_OBSERV_LOGS_EXCLUSION_LIST_ENABLED = get_var('MAIN_OBSERV_LOGS_EXCLUSION_LIST_ENABLED', false) %}
{% set ns.MAIN_OBSERV_RECEIPTS_EXCLUSION_LIST_ENABLED = get_var('MAIN_OBSERV_RECEIPTS_EXCLUSION_LIST_ENABLED', false) %}
{% set ns.MAIN_OBSERV_TRACES_EXCLUSION_LIST_ENABLED = get_var('MAIN_OBSERV_TRACES_EXCLUSION_LIST_ENABLED', false) %}
{% set ns.MAIN_OBSERV_TRANSACTIONS_EXCLUSION_LIST_ENABLED = get_var('MAIN_OBSERV_TRANSACTIONS_EXCLUSION_LIST_ENABLED', false) %}
{# Prices Variables #}
{% set ns.MAIN_PRICES_NATIVE_SYMBOLS = get_var('MAIN_PRICES_NATIVE_SYMBOLS', '') %}
{% set ns.MAIN_PRICES_NATIVE_BLOCKCHAINS = get_var('MAIN_PRICES_NATIVE_BLOCKCHAINS', ns.GLOBAL_PROJECT_NAME.lower()) %}
{% set ns.MAIN_PRICES_PROVIDER_PLATFORMS = get_var('MAIN_PRICES_PROVIDER_PLATFORMS', '') %}
{% set ns.MAIN_PRICES_TOKEN_ADDRESSES = get_var('MAIN_PRICES_TOKEN_ADDRESSES', none) %}
{% set ns.MAIN_PRICES_TOKEN_BLOCKCHAINS = get_var('MAIN_PRICES_TOKEN_BLOCKCHAINS', ns.GLOBAL_PROJECT_NAME.lower()) %}
{# Labels Variables #}
{% set ns.MAIN_LABELS_BLOCKCHAINS = get_var('MAIN_LABELS_BLOCKCHAINS', ns.GLOBAL_PROJECT_NAME.lower()) %}
{# Scores Variables #}
{% set ns.SCORES_FULL_RELOAD_ENABLED = get_var('SCORES_FULL_RELOAD_ENABLED', false) %}
{% set ns.SCORES_LIMIT_DAYS = get_var('SCORES_LIMIT_DAYS', 30) %}
{# NFT Variables #}
{% set ns.MAIN_NFT_TRANSFERS_UNIQUE_KEY = 'tx_hash' if ns.MAIN_CORE_RECEIPTS_BY_HASH_ENABLED else 'block_number' %}
{# Vertex Variables #}
{% set ns.CURATED_VERTEX_OFFCHAIN_EXCHANGE_CONTRACT = get_var('CURATED_VERTEX_OFFCHAIN_EXCHANGE_CONTRACT', '') %}
{% set ns.CURATED_VERTEX_CLEARINGHOUSE_CONTRACT = get_var('CURATED_VERTEX_CLEARINGHOUSE_CONTRACT', '') %}
{% set ns.CURATED_VERTEX_TOKEN_MAPPING = get_var('CURATED_VERTEX_TOKEN_MAPPING', {}) %}
{# Return the entire namespace as a dictionary #}
{{ return(ns) }}
{% endmacro %}
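Downstream code can then pull the namespace once and reference typed attributes; a minimal sketch:
```
{% set vars = return_vars() %}
{{ log('blocks per hour: ' ~ vars.MAIN_SL_BLOCKS_PER_HOUR, info=True) }}
{% if vars.MAIN_SL_NEW_BUILD_ENABLED %}
    {# new-build-only logic would go here #}
{% endif %}
```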


@@ -0,0 +1,141 @@
{% macro streamline_external_table_query(
        source_name,
        source_version,
        partition_function,
        balances,
        block_number,
        uses_receipts_by_hash
    ) %}
    {% if source_version != '' %}
        {% set source_version = '_' ~ source_version.lower() %}
    {% endif %}
    WITH meta AS (
        SELECT
            job_created_time AS _inserted_timestamp,
            file_name,
            {{ partition_function }} AS partition_key
        FROM
            TABLE(
                information_schema.external_table_file_registration_history(
                    start_time => DATEADD('day', -3, CURRENT_TIMESTAMP()),
                    table_name => '{{ source( "bronze_streamline", source_name ~ source_version) }}'
                )
            ) A
    )
    SELECT
        s.*,
        b.file_name,
        b._inserted_timestamp
        {% if balances %},
        r.block_timestamp :: TIMESTAMP AS block_timestamp
        {% endif %}
        {% if block_number %},
        COALESCE(
            s.value :"BLOCK_NUMBER" :: STRING,
            s.metadata :request :"data" :id :: STRING,
            PARSE_JSON(
                s.metadata :request :"data"
            ) :id :: STRING
        ) :: INT AS block_number
        {% endif %}
        {% if uses_receipts_by_hash %},
        s.value :"TX_HASH" :: STRING AS tx_hash
        {% endif %}
    FROM
        {{ source(
            "bronze_streamline",
            source_name ~ source_version
        ) }} s
        JOIN meta b
        ON b.file_name = metadata$filename
        AND b.partition_key = s.partition_key
        {% if balances %}
        JOIN {{ ref('_block_ranges') }} r
        ON r.block_number = COALESCE(
            s.value :"BLOCK_NUMBER" :: INT,
            s.value :"block_number" :: INT
        )
        {% endif %}
    WHERE
        b.partition_key = s.partition_key
        AND DATA :error IS NULL
        AND DATA IS NOT NULL
{% endmacro %}
{% macro streamline_external_table_query_fr(
        source_name,
        source_version,
        partition_function,
        partition_join_key,
        balances,
        block_number,
        uses_receipts_by_hash
    ) %}
    {% if source_version != '' %}
        {% set source_version = '_' ~ source_version.lower() %}
    {% endif %}
    WITH meta AS (
        SELECT
            registered_on AS _inserted_timestamp,
            file_name,
            {{ partition_function }} AS partition_key
        FROM
            TABLE(
                information_schema.external_table_files(
                    table_name => '{{ source( "bronze_streamline", source_name ~ source_version) }}'
                )
            ) A
    )
    SELECT
        s.*,
        b.file_name,
        b._inserted_timestamp
        {% if balances %},
        r.block_timestamp :: TIMESTAMP AS block_timestamp
        {% endif %}
        {% if block_number %},
        COALESCE(
            s.value :"BLOCK_NUMBER" :: STRING,
            s.value :"block_number" :: STRING,
            s.metadata :request :"data" :id :: STRING,
            PARSE_JSON(
                s.metadata :request :"data"
            ) :id :: STRING
        ) :: INT AS block_number
        {% endif %}
        {% if uses_receipts_by_hash %},
        s.value :"TX_HASH" :: STRING AS tx_hash
        {% endif %}
    FROM
        {{ source(
            "bronze_streamline",
            source_name ~ source_version
        ) }} s
        JOIN meta b
        ON b.file_name = metadata$filename
        AND b.partition_key = s.{{ partition_join_key }}
        {% if balances %}
        JOIN {{ ref('_block_ranges') }} r
        ON r.block_number = COALESCE(
            s.value :"BLOCK_NUMBER" :: INT,
            s.value :"block_number" :: INT
        )
        {% endif %}
    WHERE
        b.partition_key = s.{{ partition_join_key }}
        AND DATA :error IS NULL
        AND DATA IS NOT NULL
{% endmacro %}

package-lock.yml

@@ -0,0 +1,16 @@
packages:
  - git: https://github.com/FlipsideCrypto/fsc-utils.git
    revision: d3cf679e079f0cf06142de9386f215e55fe26b3b
  - package: get-select/dbt_snowflake_query_tags
    version: 2.5.0
  - package: dbt-labs/dbt_external_tables
    version: 0.8.2
  - package: dbt-labs/dbt_utils
    version: 1.0.0
  - package: calogica/dbt_expectations
    version: 0.8.5
  - git: https://github.com/FlipsideCrypto/livequery-models.git
    revision: b024188be4e9c6bc00ed77797ebdc92d351d620e
  - package: calogica/dbt_date
    version: 0.7.2
sha1_hash: ee54bf62c7933855efcc32b2a934c28f78b1799a

packages.yml

@@ -0,0 +1,9 @@
packages:
  - git: https://github.com/FlipsideCrypto/fsc-utils.git
    revision: v1.32.0
  - package: get-select/dbt_snowflake_query_tags
    version: [">=2.0.0", "<3.0.0"]
  - package: dbt-labs/dbt_external_tables
    version: 0.8.2
  - package: dbt-labs/dbt_utils
    version: 1.0.0

profiles.yml

@@ -0,0 +1,31 @@
fsc_ibc:
  target: prod
  outputs:
    dev:
      type: snowflake
      account: "{{ env_var('ACCOUNT') }}"
      role: "{{ env_var('ROLE') }}"
      user: "{{ env_var('USER') }}"
      password: "{{ env_var('PASSWORD') }}"
      region: "{{ env_var('REGION') }}"
      database: "{{ env_var('DATABASE') }}"
      warehouse: "{{ env_var('WAREHOUSE') }}"
      schema: SILVER
      threads: 4
      client_session_keep_alive: False
      query_tag: curator
    prod:
      type: snowflake
      account: "{{ env_var('ACCOUNT') }}"
      role: "{{ env_var('ROLE') }}"
      user: "{{ env_var('USER') }}"
      password: "{{ env_var('PASSWORD') }}"
      region: "{{ env_var('REGION') }}"
      database: "{{ env_var('DATABASE') }}"
      warehouse: "{{ env_var('WAREHOUSE') }}"
      schema: SILVER
      threads: 4
      client_session_keep_alive: False
      query_tag: curator

config:
  send_anonymous_usage_stats: False
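Outside of CI, the same profile resolves from your shell environment; a sketch with placeholder values:
```
export ACCOUNT=xy12345 REGION=us-east-1 ROLE=TRANSFORMER USER=dbt_user \
       PASSWORD=******** DATABASE=IBC_DEV WAREHOUSE=DBT SCHEMA=SILVER
dbt run --target dev
```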

requirements.txt

@@ -0,0 +1,3 @@
dbt-core>=1.8,<1.9
dbt-snowflake>=1.8,<1.9
protobuf==4.25.3