Mirror of https://github.com/FlipsideCrypto/gitbook.git, synced 2026-02-06 10:47:06 +00:00

GitBook: [#368] Community Curation Edits
This commit is contained in: parent c12e492945, commit ff11b8bf23
SUMMARY.md (10 changes)

@@ -5,7 +5,6 @@

## Our Data

* [Access Flipside Data](our-data/access-flipside-data.md)
* [Contribute to Flipside Data](our-data/contribute-to-flipside-data.md)
* [Tables](our-data/tables/README.md)
  * [AAVE Tables](our-data/tables/aave-tables/README.md)
    * [Market Stats](our-data/tables/aave-tables/market-stats.md)

@@ -218,6 +217,15 @@

* [Parameterized Queries](velocity/parameterized-queries.md)
* [Query Editor Shortcuts](velocity/query-editor-shortcuts.md)

## Contribute to Our Data <a href="#contribute" id="contribute"></a>

* [Community Curation](contribute/contribute-to-flipside-data.md)
* [Model Standards](contribute/model-standards/README.md)
  * [dbt Tips](contribute/model-standards/dbt-tips.md)
* [Getting Started](contribute/getting-started/README.md)
  * [Contribution Workflow](contribute/getting-started/contribution-workflow.md)
* [PR Checklist](contribute/pr-checklist.md)

## ShroomDK (SDK)

* [🍄 Get Started](shroomdk-sdk/get-started.md)
contribute/contribute-to-flipside-data.md (new file, 35 lines)

---
description: A guide to data curation with Flipside.
---

# Community Curation

## Open Source Models

Flipside Crypto is a data analytics and research firm that provides data-driven insights and analysis for the cryptocurrency market. Our data models are open source and available on GitHub, and anyone can contribute to them. We use [dbt](https://www.getdbt.com/), a command-line tool for working with data, to transform raw blockchain data ingested through RPC nodes or an indexer into clean, easy-to-use tables. These transformations are transparent to users, so they can see exactly how the data is processed and can even contribute to the development of the models themselves. By making our data models open source and transparent, we enable our partners and users to collaborate with us and help improve the accuracy and reliability of our data.

The models for each chain we offer can be reviewed and contributed to on our GitHub. Repositories follow the naming convention `<chain>-models`.

{% embed url="https://github.com/orgs/FlipsideCrypto/repositories" %}

## Contribute

### Tags & Labels

See [How to Add Tags](../our-data/address-tags-and-labels/how-to-add-your-own-tags.md) on the address tags & labels data model page.

### SQL Models

The Flipside community uses [dbt](https://www.getdbt.com/) to model data in a [Snowflake](https://www.snowflake.com/) database. The following page on our [Analytics Stack for Community Curation](broken-reference) goes into further depth on the tech stack. If you are familiar with these tools, skip to [Getting Started](getting-started/) for instructions on access and setting up your dev environment.

#### What and why?

In its simplest form, dbt allows for reproducible SQL in the form of data models, where queries are used to build tables. In the analyst workflow, a lot of data transformation is often required to parse transaction and event data down to the activity of interest. You likely factor this work out into one or more CTEs. Think of building a data model via dbt as simply one more abstraction: factoring that transform into a table analysts can select from, rather than a single-use CTE.

If you find yourself reproducing code (say, the same set of CTEs to filter transactions down to a single DEX or marketplace), that could be a perfect candidate for a community-built model.
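As a sketch of that refactor (the model name, columns, and contract filter below are hypothetical, not an actual Flipside model), the repeated CTE can become a dbt model that analysts select from directly:

```sql
-- models/silver/silver__my_dex_swaps.sql (hypothetical example)
{{ config(materialized = 'view') }}

-- the filtering you would otherwise repeat as a CTE in every query
SELECT
    block_timestamp,
    tx_hash,
    event_inputs
FROM {{ ref('silver__logs') }}              -- illustrative upstream model
WHERE contract_address = '<dex_contract>'   -- placeholder address
```

Once merged and run, the logic lives in one place, and any query that previously opened with that CTE can select from the resulting `silver.my_dex_swaps` table instead.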
Community contributions do not need to be full-blown models. Protocols are constantly evolving and our models must adapt to these changes. If you spot something out-of-date or in need of attention, suggested revisions are welcome PRs.

#### Note

This guide will be updated as the community curation process evolves. Please feel free to provide feedback to us on Discord!
contribute/getting-started/README.md (new file, 65 lines)

# Getting Started

## Access

### **Snowflake**

Community curators are granted access to a dev environment for testing and development of a data model. A member of Flipside's analytics team will need to grant you access, so please ask in the [# 🌲 | community-curation](https://discord.com/channels/784442203187314689/1053086214615466095) channel on Discord, with something along the lines of:

> Hi, I'm interested in doing data curation for Flipside, could you give me Snowflake access please? I'd like my username to be: `community_<insert_username>`

{% hint style="warning" %}
Access to Snowflake is granted for the sole purpose of community curation and testing your models. This password is not to be shared with anyone. If you know someone who would like to contribute as well, we will credential them separately. If you would like to work with Flipside data in a Snowflake environment, please see the section on [Data Shares](broken-reference) and reach out separately.
{% endhint %}

### **dbt Cloud \[Optional]**

If you are unfamiliar with dbt, we suggest creating a free account on [dbt Cloud](https://cloud.getdbt.com/). dbt Labs has built an IDE for developing dbt models. Once the environment is set up with the proper credentials, connect to a fork of a [model repository](https://github.com/orgs/FlipsideCrypto/repositories) to begin editing or building your own models. The cloud environment includes the option to preview the compiled SQL so you can see output as you work. Additionally, the command line for running dbt includes built-in autocomplete for common dbt commands.

{% hint style="info" %}
Note: if you are using dbt Cloud, you will need to fork the main repository and link your dbt Cloud environment to the fork.
{% endhint %}

## **Software Setup**

### git

If you don't already have it installed, install git on your machine. [Here](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) are [two guides](https://github.com/git-guides/install-git) that may assist you.

You will also need a [GitHub](https://github.com/) account to collaborate on the model repositories. GitHub also has an official command line tool, [`gh`](https://github.com/cli/cli#installation), that is useful for interacting with GitHub repositories.

Once set up, clone a copy of the repository of your choice to your machine and check out a branch to begin making your changes. Branch names should follow the convention `community/<branch_name>`.

* Ex: `git checkout -b community/my-new-model`

### Docker Environment

We have included a Dockerfile in eligible repositories to handle the installation of dbt on your behalf.

1. Clone a repository, like [`ethereum-models`](https://github.com/FlipsideCrypto/ethereum-models).
2. [Install Docker](https://docs.docker.com/get-docker/) on your machine.
3. Copy the details of [`.env.sample`](https://github.com/FlipsideCrypto/ethereum-models/blob/main/.env.sample) to a `.env` file with your credentials. The environment details, like account and database, will be pre-filled for you. All you should need to replace are the values below, using your previously provided username and password.
   * ```
     SF_USERNAME=<YOUR SNOWFLAKE USERNAME>
     SF_PASSWORD=<YOUR SNOWFLAKE PASSWORD>
     ```
4. Open a terminal window in the repository directory and run the command `make dbt-console`. If successful, a Docker container will spin up, install dbt, and open a console for you to run dbt commands. The container reads your `.env` file and should be connected to operate on the community curation database.
5. Test your connection!
   1. Run `dbt debug` to check the installation.
   2. Run `dbt deps` to install the dependencies listed in [`packages.yml`](https://github.com/FlipsideCrypto/ethereum-models/blob/main/packages.yml).
   3. Run `dbt test -s core__fact_blocks` to run a set of tests against the `<chain>.core.fact_blocks` model in the community curation database, checking your connection and credentials.
   4. If you run into any errors, reach out for assistance in the [Discord channel](https://discord.com/channels/784442203187314689/1053086214615466095)!

#### Docker on Windows

`make` is not recognized as a native command on Windows machines. As such, you will either need to install make for Windows or use a Linux terminal via [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install).

### Visual Studio Code

You can use any code editor, but [VS Code](https://code.visualstudio.com/) is our recommendation due to the available extensions. All code suggested via PR must be formatted using the [formatter linked here](https://marketplace.visualstudio.com/items?itemName=henriblancke.vscode-dbt-formatter).

The extension [dbt Power User](https://marketplace.visualstudio.com/items?itemName=innoverio.vscode-dbt-power-user) is also recommended.

You are now ready to create your first data contribution! Read on for an example contribution guide, which includes some dbt basics, or review the [Model Standards](../model-standards/) for insight into how we structure our projects.
@@ -1,65 +1,11 @@

---
description: A guide to data curation with Flipside.
---

# Contribution Workflow

\[ this workflow will be actively updated and enhanced as we receive feedback ]

## Public Data Models

Links to the public GitHub repos for all data model source code can be found in the [table docs](tables/) for each blockchain/blockchain project.

## Contribute Tags & Labels

See [How to Add Tags](https://docs.flipsidecrypto.com/our-data/data-models/tags#how-to-add-tags) on the Tags data model page.

## Contribute Models with dbt

The Flipside community uses [dbt](https://docs.getdbt.com/) to model data. This section gives a complete overview of getting set up to contribute models with dbt. (Hint: if you know SQL, you're 95% of the way to knowing dbt.) The [Avalanche](tables/avalanche-tables.md) and [Ethereum](tables/ethereum-tables.md) repositories are set up for community curation.

#### Access

**Snowflake (ask in #community-curation)**

"Hi, I'm interested in doing data curation for Flipside, could you give me Snowflake access please? I'd like my username to be: `community_<insert_username>`"

**dbt Cloud \[Optional]**

Sign up on your own: [https://www.getdbt.com/signup/](https://www.getdbt.com/signup/)

#### **Software Setup**

* (Optional) Install dbt in your terminal: [https://docs.getdbt.com/dbt-cli/install/overview](https://docs.getdbt.com/dbt-cli/install/overview)
* Install git (if you don't have it): [https://git-scm.com/book/en/v2/Getting-Started-Installing-Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
* Download Docker Desktop: [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/)
* Install VS Code: [https://code.visualstudio.com/download](https://code.visualstudio.com/download)

#### **Project Setup**

1. Clone the project you want to work on from the [FlipsideCrypto GitHub org](https://github.com/FlipsideCrypto)
   * Example for Ethereum: `git clone https://github.com/FlipsideCrypto/ethereum-models.git`
2. Open the project in VS Code
3. Create a `.env` file in the root directory of your project (note: the `.env` file will not be committed to the source)
   * There is a provided `.env.sample` in the repository you may use as a reference to create `.env`
4. Start a new branch in the repository
   * Ensure you have the latest pulled down from the `main` branch
     * `git checkout main`
     * `git pull`
   * Branch names should follow the convention `community/<branch_name>`
     * Ex: `git checkout -b community/my-new-model`
5. Start the dbt console (this is where you will run your models to have them deployed to Snowflake)
   * `make dbt-console`
6. Within the console, run `dbt debug` to ensure all connections are working

**You are now ready to create your first data model!**

#### **Create and contribute your first model**

1. Understand the modeling structure
   * Data models iterate through different "layers". Generally speaking, these are bronze, silver, and core. How these layers interact is defined in [Model Standards](../model-standards/).
     * Bronze is the raw data layer
     * Silver is the intermediate data modeling layer
     * Core is the presentation layer, aka what is exposed to the public
     * **Most of the time you will be working within the silver layer**
2. Create the model file within the proper layer and naming convention
   * The naming convention for the file is `<layer name>__<table name>.sql`
   * Example for the silver layer: `./models/silver/silver__my_new_table.sql`
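A minimal sketch of steps 1–2 above, under hypothetical table and column names (this is not an actual Flipside model):

```sql
-- ./models/silver/silver__my_new_table.sql (hypothetical)
{{ config(
    materialized = 'incremental',
    unique_key = 'tx_hash'
) }}

SELECT
    block_timestamp,
    tx_hash,
    _inserted_timestamp
FROM {{ ref('bronze__transactions') }}   -- illustrative upstream model

{% if is_incremental() %}
-- only process newly ingested rows on incremental runs
WHERE _inserted_timestamp >= (
    SELECT MAX(_inserted_timestamp) :: DATE - 1 FROM {{ this }}
)
{% endif %}
```

Running `dbt run -s silver__my_new_table` in the dbt console would then build `silver.my_new_table` in your dev database.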
contribute/model-standards/README.md (new file, 102 lines)

# Model Standards

## Style Standards

Our general model and style standards are derived from the best practices guide put forth by dbt Labs:

{% embed url="https://github.com/dbt-labs/corp/blob/main/dbt_style_guide.md" %}

All code submitted for a PR should be formatted according to [this dbt autoformatter](https://github.com/henriblancke/dbt-formatter), available as an extension on the VS Code extension marketplace.

The primary requirements for a Flipside dbt model (each explained in greater detail below) are:

1. [Star schema](./#dimensional-modeling-and-star-schema)
2. [Tests and full documentation](./#model-properties)
3. [Appropriate materialization](./#materialization)
4. [Consistent column names](./#column-names)

## Dimensional Modeling & Star Schema

### Table Layers

#### Bronze

1. Blockchain data that has been indexed and piped into Snowflake under the `chainwalkers` schema. This includes all core data characterizing what occurs on a chain (e.g. blocks, transactions, function calls, state changes, etc.). Chainwalkers 2.0 decodes log data but does no other transformation.
2. The only column that has been added to this data is `_inserted_timestamp`. This has been added as a requirement for all blockchains, for efficiency and data integrity reasons.
3. Any ingested data should be in the bronze layer.
4. This layer is a view on the source; as such, it is the only layer where `{{ source() }}` should be called, and no transformations should happen in bronze.

#### Silver

1. Filtered, cleaned, and transformed data.
2. Silver models should de-duplicate bronze, as chainwalkers may re-ingest a block at a later date.
3. The models in this layer are incremental, so as to reduce compute time and cost.
4. Fact and dimension tables that are completely decoded (where we can), additive (no replacing data), deduped, and not joined to other tables. We can expose silver tables in the public-facing app as gold views.
5. Can be a combination of data sources.

#### Gold (Core)

1. Curated models exposed to the public front-end.
2. [Fact and dimension](https://docs.google.com/document/d/1sdtchIcnkzMyP0HLQkA-c3BtE4AWgNFjiGFnhX3YzAs/edit#heading=h.snh3fvv82vz1) **tables should never be combined in tables, i.e. no joining transactions and labels**. They can be combined in views, where the logic is done in the underlying queries.
3. Views that can ease the burden for analysts. The wide array of data users includes some new to SQL, new to analytics, and/or new to blockchain technology. Incorporating off-chain metadata, and even aggregations (e.g. daily summaries of key transactions), can help new users understand the ecosystem as a whole.
   1. This also eases the burden on the underlying systems and facilitates "Easy" bounty programs.
   2. Facilitates learning for new users.
   3. Examples include: labels, transfers, event logs, swaps, liquidity provision/removal, daily balances of holders, prices and decimals, daily transfer summary.
4. Models that accelerate analytics for a particular protocol (e.g. compound, uniswapv3, anchor, mirror schemas).

### Core Naming Conventions

#### 3 Primary Types of Views

All views should have a prefix indicating what type of data is within (based on the [star schema](https://docs.google.com/document/d/1GxWCUBkMB55h1Qb8-t42JW7s2yJWe57nsElzE4gMSyc/edit)).

All names `should_be_snake_case`. Abbreviations should be avoided when possible; where used, they should match an ISO standard, e.g. for currency or country names.

1. Facts (`fact_`)
   1. A fact table contains measurements, metrics, and facts: anything that is measurable, such as summing, averaging, or anything over time.
2. Dimensions (`dim_`)
   1. A dimension table is a companion to the fact table, containing descriptive attributes to be used for constraining queries.
3. Convenience Views (`ez_`)
   1. Convenience views can combine facts and dimensions for the reasons above. These should only be views, in the gold layer.
   2. Use where curation is doing more of the "lift" for the view.

## Model Properties

SQL models do not dance alone! Each `sql` file should have an accompanying `yml` of the same name. This is the model properties file, and we use it primarily to document the model and its columns, and to test data in the model at the column level.

{% embed url="https://docs.getdbt.com/reference/model-properties" %}

### Testing

The model properties allow us to test the model output to ensure the data flowing through meets expectations. Some common [generic tests](https://docs.getdbt.com/docs/build/tests#generic-tests) we use are:

* unique
* not null
* accepted values

There are also [packages](https://github.com/dbt-labs/dbt-utils) with utility functions that expand the available tests to plug into a model, or you can write [custom tests](https://docs.getdbt.com/guides/best-practices/writing-custom-generic-tests) that apply to the model.
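For example, a properties file pairing those generic tests with a hypothetical model might look like the following (model, column names, and accepted values are illustrative):

```yaml
# silver__my_new_table.yml (hypothetical)
version: 2

models:
  - name: silver__my_new_table
    description: One row per transaction of interest.
    columns:
      - name: tx_hash
        description: Transaction hash.
        tests:
          - unique
          - not_null
      - name: status
        tests:
          - accepted_values:
              values: ['SUCCESS', 'FAIL']   # illustrative values
```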
### Documentation

The `yml` file is also where tables and columns are [documented](https://docs.getdbt.com/docs/collaborate/documentation#adding-descriptions-to-your-project). These should be clear and concise so users understand what data the model contains. As several columns might be used across models, we utilize the [doc block](https://docs.getdbt.com/reference/dbt-jinja-functions/doc) to define the column once in a markdown file, rather than in each individual model property file.
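For instance, a doc block defined once in a markdown file within the project (the column name here is just an example):

```
{% docs tx_hash %}
The unique hash identifying a transaction.
{% enddocs %}
```

Each model's `yml` can then reference it with `description: '{{ doc("tx_hash") }}'` instead of repeating the text.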
## Materialization

dbt models can be configured to run using one of a number of strategies. The two most common materializations in our models are [incremental](https://docs.getdbt.com/docs/build/incremental-models) and [view](https://docs.getdbt.com/terms/view).

### Incremental

True to the name, incremental models load data based on some incremental filter. In silver, just about every model is incrementally built, based on `_inserted_timestamp`.

### View

Views persist as SQL transformations without actually storing data like a table does. Every `core` (gold) model is a view on a silver incremental model. This is done to:

* drop any internal columns that need not be exposed to end users (like `_inserted_timestamp`)
* rename the model to follow the star schema

{% embed url="https://docs.getdbt.com/docs/build/materializations" %}

## Column Names

When building a model, be sure to check how columns are already named in the blockchain's data tables. Within a database, one model should not refer to `tx_id` while another uses `tx_hash`. A more comprehensive naming standard will be published soon™️.
contribute/model-standards/dbt-tips.md (new file, 40 lines)

# dbt Tips

## Naming Convention

As noted in the model standards, we segment steps into bronze/silver/core layers, and these are organized via schemas within a database. The directory structure within `models/` is not what determines the database structure! Our models include a pre-written macro that handles this database organization based on the model name: a double underscore in the file name is parsed as a schema break.

So, `silver__blocks` compiles to a table `blocks` in schema `silver`.
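Note that other models still reference it by the full model name; dbt's `ref()` takes the file name, not the schema-qualified table (columns shown are illustrative):

```sql
-- in a downstream model: dbt resolves this ref to silver.blocks at compile time
SELECT
    block_number,
    block_timestamp
FROM {{ ref('silver__blocks') }}
```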
## Table Materialization

A table is materialized as incremental via the model config. Flipside does this in each model using a config block. There are three properties in the config block to note when setting up an incremental model.

```
{{ config(
    materialized = 'incremental',
    incremental_strategy = <strategy>,
    unique_key = <unique_key_column>,
    ...
) }}
```

* [`materialized`](https://docs.getdbt.com/reference/resource-configs/materialized) sets the type of model.
* [`incremental_strategy`](https://docs.getdbt.com/docs/build/incremental-models#about-incremental\_strategy) determines the build approach. On Snowflake, the default is `merge`, but you may also see `delete+insert`.
* [`unique_key`](https://docs.getdbt.com/reference/resource-configs/unique\_key) is a required parameter, regardless of incremental or table materialization, and is used by the `incremental_strategy` to identify records.

When creating models with incremental materialization, we need to write incremental logic within the model. It is important for the incremental logic to be based on `_inserted_timestamp` and not on `block_timestamp`. This matters especially when the data encounters gaps on certain dates: it enables the model to heal itself, because gaps are associated with `block_timestamp`, and when the missing rows are inserted later, they are captured by `_inserted_timestamp`.

```sql
{% raw %}
{% if is_incremental() %}
WHERE _inserted_timestamp >= (
    SELECT
        MAX(_inserted_timestamp) :: DATE - 1
    FROM
        {{ this }}
)
{% endif %}
{% endraw %}
```
contribute/pr-checklist.md (new file, 20 lines)

# PR Checklist

## Checklist for Submitting a SQL Model PR <a href="#docs-internal-guid-73128d2e-7fff-b92b-1cf2-fe868168c9e9" id="docs-internal-guid-73128d2e-7fff-b92b-1cf2-fe868168c9e9"></a>

When you are done with your work and your contribution is ready for review, open a PR for the Flipside team to review. A set of reminders is provided below.

{% hint style="info" %}
A `yml` file with tests and documentation for the additions must accompany every `sql` model. If updating an existing model, be sure to update tests and documentation where applicable. Notes on model properties are available on the [Model Standards](model-standards/) page.
{% endhint %}

* [ ] Commit all changes of your working model to GitHub
* [ ] Run `git merge main` to pull in any changes that have been merged
* [ ] Check for conflicts this may have caused, including upstream and downstream dependencies
* [ ] Merge any and all final changes, ready for approval
* [ ] Open a PR in GitHub with the following:
  * [ ] A description of what is changing or being added
  * [ ] The dbt command to run; the default is likely `dbt run -s <model name>+`
  * [ ] Output of the dbt run showing success
  * [ ] Output of a dbt test showing success
* [ ] Post in the Discord channel that your PR is ready for review and tag your Flipside contact to review
@@ -168,7 +168,7 @@ _**More Detail with Screenshots: Outline of our Marinade Staking Transaction**_

* program: <mark style="background-color:red;">Marinade Finance - MarBmsSgKXdrN1egZf5sqe1TMai9K1rChYNDJgjq7aD</mark>

(image)

* instruction 1: <mark style="background-color:red;">Deposit</mark>

@@ -177,7 +177,7 @@ _**More Detail with Screenshots: Outline of our Marinade Staking Transaction**_

(image)

So now that we understand that Solana transactions are organized into programs, instructions, and inner instructions, it's clearer to see how our transaction's data shows up in the `solana.events` table. For example, you'll see a lot of the information from Solscan in this JSON from the `INNER_INSTRUCTIONS` column: