Mirror of https://github.com/FlipsideCrypto/sdk.git (synced 2026-02-06 18:56:44 +00:00)

Compare commits: `python@v2.` ... `main` (31 commits)
| SHA1 |
| ---------- |
| 6d20b1c0cc |
| 4af0353a24 |
| ae761cdf65 |
| 751f1adc70 |
| 2a5e4c6036 |
| e147cf8dd4 |
| 43a3044883 |
| d1393c6a4c |
| 8b98a4b924 |
| c3e7d266fb |
| c04aaa967f |
| 42b992900d |
| 351955b0d8 |
| c7f4656df1 |
| e3f7f56c9e |
| a09422a9f6 |
| 3b15ab46a4 |
| e3ce6d349f |
| 8b8d925f68 |
| db669dd8d6 |
| 1c14811368 |
| 6a7efc55b7 |
| 1b1adbf8dc |
| 2271b58dde |
| 6f409ffddd |
| 5a496febae |
| 8c8e4c4b54 |
| 46aaa29ba4 |
| 67e903efb1 |
| 2c3d58ae90 |
| 901101965e |
.github/workflows/ci_python.yml (vendored), 2 changed lines:

@@ -14,7 +14,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.7", "3.8", "3.9", "3.10"]
+        python-version: ["3.8", "3.9", "3.10"]
 
     steps:
       - uses: actions/checkout@v3
.gitignore (vendored), 4 changed lines:

@@ -19,6 +19,7 @@ node_modules
 .output
 build/
 *.egg-info/
+.history/
 
 /build/
 /public/build
@@ -32,3 +33,6 @@ r/shroomDK_0.1.0.tar.gz
 python-sdk-example.py
 r/shroomDK/api_key.txt
 r/shroomDK/test_of_page2_issue.R
+python/venv/
+venv/
+tokens.txt
README.md, 14 changed lines:

@@ -5,20 +5,22 @@ Programmatic access to the most reliable & comprehensive blockchain data in Web3
 You've found yourself at the FlipsideCrypto SDK repository, the official SDK to programmatically query all of Flipside Crypto's data.
 
 ## 🧩 The Data
 
 Flipside Crypto's Analytics Team has curated dozens of blockchain data sets with more being added each week. All tables available to query in Flipside's [Data Studio](https://flipsidecrypto.xyz) can be queried programmatically via our API and library of SDKs.
 
 ## 📖 Official Docs
 
 [https://docs.flipsidecrypto.com/flipside-api/get-started](https://docs.flipsidecrypto.com/flipside-api/get-started)
 
 ## 🗝 Want access? Generate an API Key for Free
 
-Get your [free API key here](https://flipsidecrypto.xyz/account/api-keys)
+Get your [free API key here](https://flipsidecrypto.xyz/api-keys)
 <br>
 
 ## SDKs
 
 | Language | Version | Status |
-| ------------------------ | ------- | ---------------------------------------------------------------------------------- |
-| ✅ [Python](./python/) | 2.0.7 | [](https://github.com/FlipsideCrypto/sdk/actions/workflows/ci_python.yml) |
-| ✅ [JS/TypeScript](./js) | 2.0.0 | [](https://github.com/FlipsideCrypto/sdk/actions/workflows/ci_js.yml)
-| ✅ [R](./r/shroomDK/) | Under Construction | |
+| ------------------------ | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| ✅ [Python](./python/) | 2.1.0 | [](https://github.com/FlipsideCrypto/sdk/actions/workflows/ci_python.yml) |
+| ✅ [JS/TypeScript](./js) | 2.0.1 | [](https://github.com/FlipsideCrypto/sdk/actions/workflows/ci_js.yml) |
+| ✅ [R](./r/shroomDK/) | 0.2.2 | [Available on CRAN](https://cran.r-project.org/web/packages/shroomDK/shroomDK.pdf) |
(Getting-started Jupyter notebook, filename not shown in the compare view):

@@ -1,126 +1,126 @@
     "Run your first query<br/>\n",
-    "<em>Remember to copy/paste your API Key from https://flipsidecrypto.xyz/account/api-keys below.</em>"
+    "<em>Remember to copy/paste your API Key from https://flipsidecrypto.xyz/api-keys below.</em>"
    ]

The API-key URL is the only edit; the notebook's remaining cells (the `pip install flipside` intro, the `from flipside import Flipside` import, the `ez_eth_transfers` example query, the `QueryResultSet` model description, and the result-set exploration) and its metadata are unchanged.
(xMETRIC analysis Jupyter notebook, filename not shown in the compare view):

@@ -1,290 +1,290 @@
     "Run your first query<br/>\n",
-    "<em>Remember to copy/paste your API Key from https://flipsidecrypto.xyz/account/api-keys below.</em>"
+    "<em>Remember to copy/paste your API Key from https://flipsidecrypto.xyz/api-keys below.</em>"
    ]

As in the notebook above, the API-key URL is the only edit; the remaining cells (the xMETRIC recipient-count query, the leaderboard SQL with its paginated run and Plotly histogram, and the cross-chain holdings query) and the notebook metadata are unchanged.
js/README.md, 26 changed lines:

@@ -7,7 +7,7 @@ Programmatic access to the most comprehensive blockchain data in Web3 🥳.
 
 <br>
 <br>
-You've found yourself at the Flipside Crypto JS/typescript sdk.
+You've found yourself at the Flipside Crypto JS/typescript SDK.
 <br>
 <br>
 
@@ -23,6 +23,10 @@ or if using npm
 npm install @flipsidecrypto/sdk
 ```
 
+## 🗝 Generate an API Key for Free
+
+Get your [free API key here](https://flipsidecrypto.xyz/api-keys)
+
 ## 🦾 Getting Started
 
 ```typescript
@@ -39,7 +43,7 @@ const myAddress = "0x....";
 
 // Create a query object for the `query.run` function to execute
 const query: Query = {
-  sql: `select nft_address, mint_price_eth, mint_price_usd from flipside_prod_db.ethereum_core.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
+  sql: `select nft_address, mint_price_eth, mint_price_usd from ethereum.nft.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
   maxAgeMinutes: 30,
 };
 
@@ -48,10 +52,10 @@ const result: QueryResultSet = await flipside.query.run(query);
 
 // Iterate over the results
 result.records.map((record) => {
-  const nftAddress = record.nft_address
-  const mintPriceEth = record.mint_price_eth
-  const mintPriceUSD = record.mint_price_usd
-  console.log(`address ${nftAddress} minted at a price of ${mintPrice} ETH or $${mintPriceUSD} USD`);
+  const nftAddress = record.nft_address;
+  const mintPriceEth = record.mint_price_eth;
+  const mintPriceUSD = record.mint_price_usd;
+  console.log(`address ${nftAddress} minted at a price of ${mintPriceEth} ETH or $${mintPriceUSD} USD`);
 });
 ```
@@ -99,7 +103,7 @@ Let's create a query to retrieve all NFTs minted by an address:
 const yourAddress = "<your_ethereum_address>";
 
 const query: Query = {
-  sql: `select nft_address, mint_price_eth, mint_price_usd from flipside_prod_db.ethereum_core.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
+  sql: `select nft_address, mint_price_eth, mint_price_usd from ethereum.nft.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
   maxAgeMinutes: 5,
   cached: true,
   timeoutMinutes: 15,
@@ -298,7 +302,7 @@ Set `maxAgeMinutes` to 30:
 
 ```typescript
 const query: Query = {
-  sql: `select nft_address, mint_price_eth, mint_price_usd from flipside_prod_db.ethereum_core.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
+  sql: `select nft_address, mint_price_eth, mint_price_usd from ethereum.nft.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
   maxAgeMinutes: 30
 };
 ```
@@ -309,13 +313,13 @@ If you would like to force a cache bust and re-execute the query. You have two o
 
 ```typescript
 const query: Query = {
-  sql: `select nft_address, mint_price_eth, mint_price_usd from flipside_prod_db.ethereum_core.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
+  sql: `select nft_address, mint_price_eth, mint_price_usd from ethereum.nft.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
   maxAgeMinutes: 0
 };
 
 // or:
 const query: Query = {
-  sql: `select nft_address, mint_price_eth, mint_price_usd from flipside_prod_db.ethereum_core.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
+  sql: `select nft_address, mint_price_eth, mint_price_usd from ethereum.nft.ez_nft_mints where nft_to_address = LOWER('${myAddress}')`,
   maxAgeMinutes: 30,
   cache: false
 };
@@ -358,4 +362,4 @@ Flipside does NOT charge for the number of bytes/records returned.
 
 ### Client Side Request Requirements
 
-All API Keys correspond to a list of hostnames. Client-side requests that do not originate from the corresponding hostname will fail. You may configure hostnames [here](https://flipsidecrypto.xyz/account/api-keys).
+All API Keys correspond to a list of hostnames. Client-side requests that do not originate from the corresponding hostname will fail. You may configure hostnames [here](https://flipsidecrypto.xyz/api-keys).
js/package.json:

@@ -1,6 +1,6 @@
 {
   "name": "@flipsidecrypto/sdk",
-  "version": "2.0.0",
+  "version": "2.1.0",
   "description": "The official Flipside Crypto SDK",
   "main": "dist/src/index.js",
   "types": "dist/src/index.d.ts",
@@ -32,6 +32,6 @@
   "license": "MIT",
   "dependencies": {
     "@types/eslint": "^8.4.8",
-    "axios": "^0.27.2"
+    "xior": "^0.1.1"
   }
 }
(TypeScript API client source, filename not shown in the compare view):

@@ -1,4 +1,4 @@
-import axios, { AxiosError, AxiosResponse } from "axios";
+import xior, { XiorError as AxiosError, XiorResponse as AxiosResponse } from "xior";
 import { ServerError, UnexpectedSDKError, UserError } from "./errors";
 import {
   CompassApiClient,
@@ -26,6 +26,7 @@ import {
 
 const PARSE_ERROR_MSG = "the api returned an error and there was a fatal client side error parsing that error msg";
 
+const axios = xior.create();
 export class Api implements CompassApiClient {
   url: string;
   #baseUrl: string;
(TypeScript examples source, filename not shown in the compare view):

@@ -21,7 +21,7 @@ const runIt = async (): Promise<void> => {
 async function runWithSuccess(flipside: Flipside) {
   // Create a query object for the `query.run` function to execute
   const query: Query = {
-    sql: "select nft_address, mint_price_eth, mint_price_usd from ethereum.core.ez_nft_mints limit 100",
+    sql: "select nft_address, mint_price_eth, mint_price_usd from ethereum.nft.ez_nft_mints limit 100",
     ttlMinutes: 10,
     pageSize: 5,
     pageNumber: 1,
@@ -41,7 +41,7 @@ async function runWithSuccess(flipside: Flipside) {
 async function runWithError(flipside: Flipside) {
   // Create a query object for the `query.run` function to execute
   const query: Query = {
-    sql: "select nft_address mint_price_eth mint_price_usd from ethereum.core.ez_nft_mints limit 100",
+    sql: "select nft_address mint_price_eth mint_price_usd from ethereum.nft.ez_nft_mints limit 100",
     ttlMinutes: 10,
     pageSize: 5,
     pageNumber: 1,
@@ -58,7 +58,7 @@ async function runWithError(flipside: Flipside) {
 async function pageThruResults(flipside: Flipside) {
   // Create a query object for the `query.run` function to execute
   const query: Query = {
-    sql: "select nft_address, mint_price_eth, mint_price_usd from ethereum.core.ez_nft_mints limit 100",
+    sql: "select nft_address, mint_price_eth, mint_price_usd from ethereum.nft.ez_nft_mints limit 100",
     ttlMinutes: 10,
     pageSize: 25,
     pageNumber: 1,
@@ -93,7 +93,7 @@ async function pageThruResults(flipside: Flipside) {
 async function getQueryRunSuccess(flipside: Flipside) {
   // Create a query object for the `query.run` function to execute
   const query: Query = {
-    sql: "select nft_address, mint_price_eth, mint_price_usd from ethereum.core.ez_nft_mints limit 100",
+    sql: "select nft_address, mint_price_eth, mint_price_usd from ethereum.nft.ez_nft_mints limit 100",
     ttlMinutes: 10,
     pageSize: 5,
     pageNumber: 1,
@@ -124,11 +124,11 @@ async function getQueryRunError(flipside: Flipside) {
 async function cancelQueryRun(flipside: Flipside) {
   // Create a query object for the `query.run` function to execute
   const query: Query = {
-    sql: "select nft_address, mint_price_eth, mint_price_usd from ethereum.core.ez_nft_mints limit 100",
+    sql: "select nft_address, mint_price_eth, mint_price_usd from ethereum.nft.ez_nft_mints limit 999",
     ttlMinutes: 10,
     pageSize: 5,
     pageNumber: 1,
-    maxAgeMinutes: 10,
+    maxAgeMinutes: 0,
   };
 
   const queryRun = await flipside.query.createQueryRun(query);
js/yarn.lock, 69 changed lines:

@@ -94,19 +94,6 @@ assertion-error@^1.1.0:
   resolved "https://registry.yarnpkg.com/assertion-error/-/assertion-error-1.1.0.tgz#e60b6b0e8f301bd97e5375215bda406c85118c0b"
   integrity sha512-jgsaNduz+ndvGyFt3uSuWqvy4lCnIJiovtouQN5JZHOKCS2QuhEdbcQHFhVksz2N2U9hXJo8odG7ETyWlEeuDw==
 
-asynckit@^0.4.0:
-  version "0.4.0"
-  resolved "https://registry.yarnpkg.com/asynckit/-/asynckit-0.4.0.tgz#c79ed97f7f34cb8f2ba1bc9790bcc366474b4b79"
-  integrity sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==
-
-axios@^0.27.2:
-  version "0.27.2"
-  resolved "https://registry.yarnpkg.com/axios/-/axios-0.27.2.tgz#207658cc8621606e586c85db4b41a750e756d972"
-  integrity sha512-t+yRIyySRTp/wua5xEr+z1q60QmLq8ABsS5O9Me1AsE5dfKqgnCFzwiCZZ/cGNd1lq4/7akDWMxdhVlucjmnOQ==
-  dependencies:
-    follow-redirects "^1.14.9"
-    form-data "^4.0.0"
-
 balanced-match@^1.0.0:
   version "1.0.2"
   resolved "https://registry.yarnpkg.com/balanced-match/-/balanced-match-1.0.2.tgz#e83e3a7e3f300b34cb9d87f615fa0cbf357690ee"
@@ -177,13 +164,6 @@ color-name@~1.1.4:
   resolved "https://registry.yarnpkg.com/color-name/-/color-name-1.1.4.tgz#c2a09a87acbde69543de6f63fa3995c826c536a2"
   integrity sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==
 
-combined-stream@^1.0.8:
-  version "1.0.8"
-  resolved "https://registry.yarnpkg.com/combined-stream/-/combined-stream-1.0.8.tgz#c3d45a8b34fd730631a110a8a2520682b31d5a7f"
-  integrity sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==
-  dependencies:
-    delayed-stream "~1.0.0"
-
 concat-map@0.0.1:
   version "0.0.1"
   resolved "https://registry.yarnpkg.com/concat-map/-/concat-map-0.0.1.tgz#d8a96bd77fd68df7793a73036a3ba0d5405d477b"
@@ -212,11 +192,6 @@ deep-eql@^3.0.1:
   dependencies:
     type-detect "^4.0.0"
 
-delayed-stream@~1.0.0:
-  version "1.0.0"
-  resolved "https://registry.yarnpkg.com/delayed-stream/-/delayed-stream-1.0.0.tgz#df3ae199acadfb7d440aaae0b29e2272b24ec619"
-  integrity sha1-3zrhmayt+31ECqrgsp4icrJOxhk=
-
 emoji-regex@^8.0.0:
   version "8.0.0"
   resolved "https://registry.yarnpkg.com/emoji-regex/-/emoji-regex-8.0.0.tgz#e818fd69ce5ccfcb404594f842963bf53164cc37"
@@ -361,11 +336,6 @@ find-up@^5.0.0:
     locate-path "^6.0.0"
     path-exists "^4.0.0"
 
-follow-redirects@^1.14.9:
-  version "1.15.0"
-  resolved "https://registry.yarnpkg.com/follow-redirects/-/follow-redirects-1.15.0.tgz#06441868281c86d0dda4ad8bdaead2d02dca89d4"
-  integrity sha512-aExlJShTV4qOUOL7yF1U5tvLCB0xQuudbf6toyYA0E/acBNw71mvjFTnLaRp50aQaYocMR0a/RMMBIHeZnGyjQ==
-
 foreground-child@^2.0.0:
   version "2.0.0"
   resolved "https://registry.yarnpkg.com/foreground-child/-/foreground-child-2.0.0.tgz#71b32800c9f15aa8f2f83f4a6bd9bff35d861a53"
@@ -374,15 +344,6 @@ foreground-child@^2.0.0:
     cross-spawn "^7.0.0"
     signal-exit "^3.0.2"
 
-form-data@^4.0.0:
-  version "4.0.0"
-  resolved "https://registry.yarnpkg.com/form-data/-/form-data-4.0.0.tgz#93919daeaf361ee529584b9b31664dc12c9fa452"
-  integrity sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==
-  dependencies:
-    asynckit "^0.4.0"
-    combined-stream "^1.0.8"
-    mime-types "^2.1.12"
-
 fs.realpath@^1.0.0:
   version "1.0.0"
   resolved "https://registry.yarnpkg.com/fs.realpath/-/fs.realpath-1.0.0.tgz#1504ad2523158caa40db4a2787cb01411994ea4f"
@@ -515,18 +476,6 @@ make-dir@^3.0.0:
   dependencies:
     semver "^6.0.0"
 
-mime-db@1.52.0:
-  version "1.52.0"
-  resolved "https://registry.yarnpkg.com/mime-db/-/mime-db-1.52.0.tgz#bbabcdc02859f4987301c856e3387ce5ec43bf70"
-  integrity sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==
-
-mime-types@^2.1.12:
-  version "2.1.35"
-  resolved "https://registry.yarnpkg.com/mime-types/-/mime-types-2.1.35.tgz#381a871b62a734450660ae3deee44813f70d959a"
-  integrity sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==
-  dependencies:
-    mime-db "1.52.0"
-
 minimatch@^3.0.4, minimatch@^3.1.1:
   version "3.1.2"
   resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.1.2.tgz#19cd194bfd3e428f049a70817c038d89ab4be35b"
@@ -710,6 +659,11 @@ test-exclude@^6.0.0:
     glob "^7.1.4"
     minimatch "^3.0.4"
 
+tiny-lru@^11.2.5:
+  version "11.2.5"
+  resolved "https://registry.yarnpkg.com/tiny-lru/-/tiny-lru-11.2.5.tgz#b138b99022aa26c567fa51a8dbf9e3e2959b2b30"
+  integrity sha512-JpqM0K33lG6iQGKiigcwuURAKZlq6rHXfrgeL4/I8/REoyJTGU+tEMszvT/oTRVHG2OiylhGDjqPp1jWMlr3bw==
+
 tinypool@^0.1.3:
   version "0.1.3"
   resolved "https://registry.yarnpkg.com/tinypool/-/tinypool-0.1.3.tgz#b5570b364a1775fd403de5e7660b325308fee26b"
@@ -725,6 +679,11 @@ totalist@^3.0.0:
   resolved "https://registry.yarnpkg.com/totalist/-/totalist-3.0.0.tgz#4ef9c58c5f095255cdc3ff2a0a55091c57a3a1bd"
   integrity sha512-eM+pCBxXO/njtF7vdFsHuqb+ElbxqtI4r5EAvk6grfAFyJ6IvWlSkfZ5T9ozC6xWw3Fj1fGoSmrl0gUs46JVIw==
 
+ts-deepmerge@^7.0.0:
+  version "7.0.0"
+  resolved "https://registry.yarnpkg.com/ts-deepmerge/-/ts-deepmerge-7.0.0.tgz#ee824dc177d452603348c7e6f3b90223434a6b44"
+  integrity sha512-WZ/iAJrKDhdINv1WG6KZIGHrZDar6VfhftG1QJFpVbOYZMYJLJOvZOo1amictRXVdBXZIgBHKswMTXzElngprA==
+
 type-detect@^4.0.0, type-detect@^4.0.5:
   version "4.0.8"
   resolved "https://registry.yarnpkg.com/type-detect/-/type-detect-4.0.8.tgz#7646fb5f18871cfbb7749e69bd39a6388eb7450c"
@@ -785,6 +744,14 @@ wrappy@1:
   resolved "https://registry.yarnpkg.com/wrappy/-/wrappy-1.0.2.tgz#b5243d8f3ec1aa35f1364605bc0d1036e30ab69f"
   integrity sha1-tSQ9jz7BqjXxNkYFvA0QNuMKtp8=
 
+xior@^0.1.1:
+  version "0.1.1"
+  resolved "https://registry.yarnpkg.com/xior/-/xior-0.1.1.tgz#285e996585e1c0ab42ee3aca3edcef5c0d06c4aa"
+  integrity sha512-GZwWfZ7DoZpNMsUCRaKJKAPgBcfLx8/IJM9NOlFJVF87PPRHHjLhhblWOOOxyLPgC3NJkT+fFHzxYlQlGbCbhw==
+  dependencies:
+    tiny-lru "^11.2.5"
+    ts-deepmerge "^7.0.0"
+
 y18n@^5.0.5:
   version "5.0.8"
   resolved "https://registry.yarnpkg.com/y18n/-/y18n-5.0.8.tgz#7f4934d0f7ca8c56f95314939ddcd2dd91ce1d55"
(python SDK version file):

@@ -1 +1 @@
-2.0.7
+2.1.0
python/log.txt: new, empty file (0 lines).
(Python test requirements, filename not shown in the compare view):

@@ -1,2 +1,3 @@
 pytest==6.2.4
 freezegun==1.1.0
+requests-mock==1.11.0
(Python runtime requirements, filename not shown in the compare view):

@@ -1,2 +1,2 @@
-pydantic==1.10.7
-requests==2.29.0
+pydantic==2.10.0
+requests==2.32.0
(python setup.py):

@@ -32,11 +32,10 @@ setup(
        "Intended Audience :: Developers",  # Define that your audience are developers
        "License :: OSI Approved :: MIT License",  # Again, pick a license
        "Operating System :: OS Independent",
-        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
    ],
    dependency_links=[],
-    python_requires=">=3.7",
+    python_requires=">=3.8",
 )
(python errors module):

@@ -18,11 +18,11 @@ class QueryRunTimeoutError(BaseError):
     Base class for all QueryRunTimeoutError errors.
     """
 
-    def __init__(self, timeoutMinutes: Union[int, float, None] = None):
-        if timeoutMinutes is None:
+    def __init__(self, timeoutSeconds: Union[int, float, None] = None):
+        if timeoutSeconds is None:
             self.message = f"QUERY_RUN_TIMEOUT_ERROR: your query has timed out."
         else:
-            self.message = f"QUERY_RUN_TIMEOUT_ERROR: your query has timed out after {timeoutMinutes} minutes."
+            self.message = f"QUERY_RUN_TIMEOUT_ERROR: your query has timed out after {timeoutSeconds} seconds."
         super().__init__(self.message)
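To make the rename concrete, here is a minimal, self-contained sketch of the class as it reads after this change, together with a hypothetical call site. The `BaseError` stub and the 20-minute example value are illustrative assumptions; the real SDK defines `BaseError` in the same module and chooses the timeout elsewhere.

```python
from typing import Union


class BaseError(Exception):
    pass


class QueryRunTimeoutError(BaseError):
    """Raised when a query run exceeds its allotted time."""

    def __init__(self, timeoutSeconds: Union[int, float, None] = None):
        if timeoutSeconds is None:
            self.message = "QUERY_RUN_TIMEOUT_ERROR: your query has timed out."
        else:
            self.message = (
                f"QUERY_RUN_TIMEOUT_ERROR: your query has timed out "
                f"after {timeoutSeconds} seconds."
            )
        super().__init__(self.message)


# Hypothetical call site: a 20-minute timeout is now reported in seconds.
try:
    raise QueryRunTimeoutError(timeoutSeconds=20 * 60)
except QueryRunTimeoutError as e:
    print(e.message)
    # QUERY_RUN_TIMEOUT_ERROR: your query has timed out after 1200 seconds.
```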
(python SDK entry module):

@@ -11,7 +11,7 @@ from .rpc import RPC
 
 API_BASE_URL = "https://api-v2.flipsidecrypto.xyz"
 
-SDK_VERSION = "2.0.4"
+SDK_VERSION = "2.1.0"
 SDK_PACKAGE = "python"
 
 DEFAULT_DATA_SOURCE = "snowflake-default"
@ -39,21 +39,22 @@ class CompassQueryIntegration(object):
|
|||||||
def run(self, query: Query) -> QueryResultSet:
|
def run(self, query: Query) -> QueryResultSet:
|
||||||
query = self._set_query_defaults(query)
|
query = self._set_query_defaults(query)
|
||||||
|
|
||||||
|
# Use the default values from Query class when None
|
||||||
|
ttl_hours = int((query.ttl_minutes or 0) / 60)
|
||||||
|
max_age_minutes = query.max_age_minutes or 5 # default from Query class
|
||||||
|
retry_interval_seconds = query.retry_interval_seconds or 1 # default from Query class
|
||||||
|
|
||||||
create_query_run_params = CreateQueryRunRpcParams(
|
create_query_run_params = CreateQueryRunRpcParams(
|
||||||
resultTTLHours=int(query.ttl_minutes / 60)
|
resultTTLHours=ttl_hours,
|
||||||
if query.ttl_minutes
|
sql=query.sql or "",
|
||||||
else DEFAULTS.ttl_minutes,
|
maxAgeMinutes=max_age_minutes,
|
||||||
sql=query.sql,
|
|
||||||
maxAgeMinutes=query.max_age_minutes
|
|
||||||
if query.max_age_minutes
|
|
||||||
else DEFAULTS.max_age_minutes,
|
|
||||||
tags=Tags(
|
tags=Tags(
|
||||||
sdk_language="python",
|
sdk_language="python",
|
||||||
sdk_package=query.sdk_package,
|
sdk_package=query.sdk_package,
|
||||||
sdk_version=query.sdk_version,
|
sdk_version=query.sdk_version,
|
||||||
),
|
),
|
||||||
dataSource=query.data_source if query.data_source else "snowflake-default",
|
dataSource=query.data_source or "snowflake-default",
|
||||||
dataProvider=query.data_provider if query.data_provider else "flipside",
|
dataProvider=query.data_provider or "flipside",
|
||||||
)
|
)
|
||||||
created_query = self.rpc.create_query(create_query_run_params)
|
created_query = self.rpc.create_query(create_query_run_params)
|
||||||
if created_query.error:
|
if created_query.error:
|
||||||
@ -67,18 +68,16 @@ class CompassQueryIntegration(object):
|
|||||||
|
|
||||||
query_run = self._get_query_run_loop(
|
query_run = self._get_query_run_loop(
|
||||||
created_query.result.queryRun.id,
|
created_query.result.queryRun.id,
|
||||||
page_number=query.page_number,
|
page_number=query.page_number or 1,
|
||||||
page_size=query.page_size,
|
page_size=query.page_size or 100000,
|
||||||
timeout_minutes=query.timeout_minutes if query.timeout_minutes else 20,
|
timeout_minutes=query.timeout_minutes or 20,
|
||||||
retry_interval_seconds=query.retry_interval_seconds
|
retry_interval_seconds=retry_interval_seconds,
|
||||||
if query.retry_interval_seconds
|
|
||||||
else 1,
|
|
||||||
)
|
)
|
||||||
|
|
||||||
query_result = self._get_query_results(
|
query_result = self._get_query_results(
|
||||||
query_run.id,
|
query_run.id,
|
||||||
page_number=query.page_number if query.page_number else 1,
|
page_number=query.page_number or 1,
|
||||||
page_size=query.page_size if query.page_size else 100000,
|
page_size=query.page_size or 100000,
|
||||||
)
|
)
|
||||||
|
|
||||||
return QueryResultSetBuilder(
|
return QueryResultSetBuilder(
|
||||||
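The rewrite above replaces the `x if x else default` conditionals with `x or default` and hoists the computed defaults into local variables. The two forms are equivalent, including the shared caveat that any falsy value, not just `None`, triggers the fallback. A standalone Python sketch (function and values are illustrative, not from the SDK):

```
def resolve_paging(page_number, page_size):
    # `or` falls back whenever the left operand is falsy, so None and 0
    # are both replaced -- identical to the old `x if x else d` spelling.
    return page_number or 1, page_size or 100000

assert resolve_paging(None, None) == (1, 100000)
assert resolve_paging(3, 500) == (3, 500)
assert resolve_paging(0, 0) == (1, 100000)  # note: 0 counts as "unset"
```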
@ -23,4 +23,4 @@ class CancelQueryRunRpcResult(BaseModel):
 
 
 class CancelQueryRunRpcResponse(RpcResponse):
-    result: Union[CancelQueryRunRpcResult, None]
+    result: Union[CancelQueryRunRpcResult, None] = None

@ -11,23 +11,23 @@ class QueryRun(BaseModel):
     sqlStatementId: str
     state: str
     path: str
-    fileCount: Optional[int]
-    lastFileNumber: Optional[int]
-    fileNames: Optional[str]
-    errorName: Optional[str]
-    errorMessage: Optional[str]
-    errorData: Optional[Any]
-    dataSourceQueryId: Optional[str]
-    dataSourceSessionId: Optional[str]
-    startedAt: Optional[str]
-    queryRunningEndedAt: Optional[str]
-    queryStreamingEndedAt: Optional[str]
-    endedAt: Optional[str]
-    rowCount: Optional[int]
-    totalSize: Optional[int]
+    fileCount: Optional[int] = None
+    lastFileNumber: Optional[int] = None
+    fileNames: Optional[str] = None
+    errorName: Optional[str] = None
+    errorMessage: Optional[str] = None
+    errorData: Optional[Any] = None
+    dataSourceQueryId: Optional[str] = None
+    dataSourceSessionId: Optional[str] = None
+    startedAt: Optional[str] = None
+    queryRunningEndedAt: Optional[str] = None
+    queryStreamingEndedAt: Optional[str] = None
+    endedAt: Optional[str] = None
+    rowCount: Optional[int] = None
+    totalSize: Optional[int] = None
     tags: Tags
     dataSourceId: str
     userId: str
     createdAt: str
     updatedAt: datetime
-    archivedAt: Optional[datetime]
+    archivedAt: Optional[datetime] = None
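The long run of `= None` additions in this and the following hunks tracks a Pydantic behavior change: in v1, a field annotated `Optional[X]` was implicitly optional with a `None` default, while v2 treats any field without an explicit default as required. A small sketch of the difference under Pydantic v2 (model names are illustrative):

```
from typing import Optional
from pydantic import BaseModel, ValidationError

class WithDefault(BaseModel):
    rowCount: Optional[int] = None   # the pattern adopted throughout this diff

class WithoutDefault(BaseModel):
    rowCount: Optional[int]          # v2: value may be None, but the key is required

WithDefault()                        # ok -- rowCount defaults to None

try:
    WithoutDefault()
except ValidationError as err:
    print(err)                       # 1 validation error: rowCount Field required
```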
@ -6,4 +6,4 @@ from pydantic import BaseModel
 class RpcError(BaseModel):
     code: int
     message: str
-    data: Optional[Any]
+    data: Optional[Any] = None

@ -8,5 +8,5 @@ from .rpc_error import RpcError
 class RpcResponse(BaseModel):
     jsonrpc: str
     id: int
-    result: Union[Optional[Dict[str, Any]], None]
-    error: Optional[RpcError]
+    result: Union[Optional[Dict[str, Any]], None] = None
+    error: Optional[RpcError] = None

@ -10,7 +10,7 @@ class SqlStatement(BaseModel):
     id: str
     statementHash: str
     sql: str
-    columnMetadata: Optional[ColumnMetadata]
+    columnMetadata: Optional[ColumnMetadata] = None
     userId: str
     tags: Tags
     createdAt: str

@ -5,6 +5,6 @@ from pydantic import BaseModel
 
 
 class Tags(BaseModel):
-    sdk_package: Optional[str]
-    sdk_version: Optional[str]
-    sdk_language: Optional[str]
+    sdk_package: Optional[str] = None
+    sdk_version: Optional[str] = None
+    sdk_language: Optional[str] = None

@ -33,4 +33,4 @@ class CreateQueryRunRpcResult(BaseModel):
 
 
 class CreateQueryRunRpcResponse(RpcResponse):
-    result: Union[CreateQueryRunRpcResult, None]
+    result: Union[CreateQueryRunRpcResult, None] = None

@ -21,8 +21,8 @@ class GetQueryRunRpcRequest(RpcRequest):
 # Response
 class GetQueryRunRpcResult(BaseModel):
     queryRun: QueryRun
-    redirectedToQueryRun: Optional[QueryRun]
+    redirectedToQueryRun: Optional[QueryRun] = None
 
 
 class GetQueryRunRpcResponse(RpcResponse):
-    result: Union[GetQueryRunRpcResult, None]
+    result: Union[GetQueryRunRpcResult, None] = None

@ -1,6 +1,6 @@
 from typing import Any, Dict, List, Optional, Union
 
-from pydantic import BaseModel
+from pydantic import ConfigDict, BaseModel
 
 from .core.page import Page
 from .core.page_stats import PageStats
@ -22,9 +22,13 @@ class Filter(BaseModel):
     like: Optional[Any] = None
     in_: Optional[List[Any]] = None
     notIn: Optional[List[Any]] = None
-    class Config:
-        fields = {"in_": "in"}
+    # TODO[pydantic]: The following keys were removed: `fields`.
+    # Check https://docs.pydantic.dev/dev-v2/migration/#changes-to-config for more information.
+    model_config = ConfigDict(
+        alias_generator=None,
+        populate_by_name=True,
+        json_schema_extra={"fields": {"in_": "in"}}
+    )
 
     def dict(self, *args, **kwargs) -> dict:
         kwargs.setdefault("exclude_none", True)  # Exclude keys with None values
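The `ConfigDict` block above is the auto-migrator's literal translation of the v1 config, and the `TODO[pydantic]` comment it leaves behind is a real caution: `json_schema_extra` only annotates the generated JSON schema; it does not reproduce the rename that v1's `fields = {"in_": "in"}` applied during (de)serialization. A hedged sketch of the field-level alias that restores that behavior in Pydantic v2 (the class name is illustrative):

```
from typing import Any, List, Optional
from pydantic import BaseModel, ConfigDict, Field

class FilterSketch(BaseModel):
    model_config = ConfigDict(populate_by_name=True)

    eq: Optional[Any] = None
    # Alias the reserved word on the field itself, as v1's `fields` mapping did.
    in_: Optional[List[Any]] = Field(None, alias="in")

f = FilterSketch(**{"in": [1, 2, 3]})     # accepts the wire name
assert f.model_dump(by_alias=True, exclude_none=True) == {"in": [1, 2, 3]}
```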
@ -62,15 +66,15 @@ class GetQueryRunResultsRpcRequest(RpcRequest):
 
 # Response
 class GetQueryRunResultsRpcResult(BaseModel):
-    columnNames: Union[Optional[List[str]], None]
-    columnTypes: Union[Optional[List[str]], None]
-    rows: Union[List[Any], None]
-    page: Union[PageStats, None]
-    sql: Union[str, None]
-    format: Union[ResultFormat, None]
+    columnNames: Union[Optional[List[str]], None] = None
+    columnTypes: Union[Optional[List[str]], None] = None
+    rows: Union[List[Any], None] = None
+    page: Union[PageStats, None] = None
+    sql: Union[str, None] = None
+    format: Union[ResultFormat, None] = None
     originalQueryRun: QueryRun
-    redirectedToQueryRun: Union[QueryRun, None]
+    redirectedToQueryRun: Union[QueryRun, None] = None
 
 
 class GetQueryRunResultsRpcResponse(RpcResponse):
-    result: Union[GetQueryRunResultsRpcResult, None]
+    result: Union[GetQueryRunResultsRpcResult, None] = None

@ -23,4 +23,4 @@ class GetSqlStatemetnResult(BaseModel):
 
 
 class GetSqlStatementResponse(RpcResponse):
-    result: Union[GetSqlStatemetnResult, None]
+    result: Union[GetSqlStatemetnResult, None] = None

@ -4,7 +4,7 @@ from pydantic import BaseModel, Field
 
 
 class Query(BaseModel):
-    sql: str = Field(None, description="SQL query to execute")
+    sql: Optional[str] = Field(None, description="SQL query to execute")
     ttl_minutes: Optional[int] = Field(
         None, description="The number of minutes to cache the query results"
     )

@ -21,8 +21,8 @@ class Query(BaseModel):
         None,
         description="An override on the cache. A value of true will Re-Execute the query.",
     )
-    page_size: int = Field(None, description="The number of results to return per page")
-    page_number: int = Field(None, description="The page number to return")
+    page_size: Optional[int] = Field(None, description="The number of results to return per page")
+    page_number: Optional[int] = Field(None, description="The page number to return")
     sdk_package: Optional[str] = Field(
         None, description="The SDK package used for the query"
     )
@ -1,20 +1,21 @@
+from typing import Optional
 from pydantic import BaseModel, Field
 
 
 class QueryDefaults(BaseModel):
-    ttl_minutes: int = Field(
+    ttl_minutes: Optional[int] = Field(
         None, description="The number of minutes to cache the query results"
     )
-    max_age_minutes: int = Field(
+    max_age_minutes: Optional[int] = Field(
         None,
         description="The max age of query results to accept before deciding to run a query again",
     )
     cached: bool = Field(False, description="Whether or not to cache the query results")
-    timeout_minutes: int = Field(
+    timeout_minutes: Optional[int] = Field(
         None, description="The number of minutes to timeout the query"
     )
-    retry_interval_seconds: float = Field(
+    retry_interval_seconds: Optional[float] = Field(
         None, description="The number of seconds to wait before retrying the query"
     )
-    page_size: int = Field(None, description="The number of results to return per page")
-    page_number: int = Field(None, description="The page number to return")
+    page_size: Optional[int] = Field(None, description="The number of results to return per page")
+    page_number: Optional[int] = Field(None, description="The page number to return")

@ -10,7 +10,7 @@ class QueryResultSet(BaseModel):
     query_id: Union[str, None] = Field(None, description="The server id of the query")
 
     status: str = Field(
-        False, description="The status of the query (`PENDING`, `FINISHED`, `ERROR`)"
+        "PENDING", description="The status of the query (`PENDING`, `FINISHED`, `ERROR`)"
     )
     columns: Union[List[str], None] = Field(
         None, description="The names of the columns in the result set"

@ -29,4 +29,4 @@ class QueryResultSet(BaseModel):
     page: Union[PageStats, None] = Field(
         None, description="Summary of page stats for this query result set"
     )
-    error: Any
+    error: Any = None
@ -1,40 +1,41 @@
 from datetime import datetime
+from typing import Optional
 
 from pydantic import BaseModel, Field
 
 
 class QueryRunStats(BaseModel):
-    started_at: datetime = Field(None, description="The start time of the query run.")
-    ended_at: datetime = Field(None, description="The end time of the query run.")
-    query_exec_started_at: datetime = Field(
+    started_at: Optional[datetime] = Field(None, description="The start time of the query run.")
+    ended_at: Optional[datetime] = Field(None, description="The end time of the query run.")
+    query_exec_started_at: Optional[datetime] = Field(
         None, description="The start time of query execution."
     )
-    query_exec_ended_at: datetime = Field(
+    query_exec_ended_at: Optional[datetime] = Field(
         None, description="The end time of query execution."
     )
-    streaming_started_at: datetime = Field(
+    streaming_started_at: Optional[datetime] = Field(
         None, description="The start time of streaming query results."
     )
-    streaming_ended_at: datetime = Field(
+    streaming_ended_at: Optional[datetime] = Field(
         None, description="The end time of streaming query results."
     )
-    elapsed_seconds: int = Field(
+    elapsed_seconds: Optional[int] = Field(
        None,
         description="The number of seconds elapsed between the start and end times.",
     )
-    queued_seconds: int = Field(
+    queued_seconds: Optional[int] = Field(
         None,
         description="The number of seconds elapsed between when the query was created and when execution on the data source began.",
     )
-    streaming_seconds: int = Field(
+    streaming_seconds: Optional[int] = Field(
         None,
         description="The number of seconds elapsed between when the query execution completed and results were fully streamed to Flipside's servers.",
     )
-    query_exec_seconds: int = Field(
+    query_exec_seconds: Optional[int] = Field(
         None,
         description="The number of seconds elapsed between when the query execution started and when it completed on the data source.",
     )
-    record_count: int = Field(
+    record_count: Optional[int] = Field(
         None, description="The number of records returned by the query."
     )
-    bytes: int = Field(None, description="The number of bytes returned by the query.")
+    bytes: Optional[int] = Field(None, description="The number of bytes returned by the query.")

@ -6,4 +6,4 @@ from pydantic import BaseModel
 class SleepConfig(BaseModel):
     attempts: int
     timeout_minutes: Union[int, float]
-    interval_seconds: Optional[float]
+    interval_seconds: Optional[float] = None

@ -1,4 +1,6 @@
 import json
+import pytest
+import requests_mock
 
 from ....errors import (
     ApiError,

@ -20,6 +22,12 @@ from ...utils.mock_data.get_sql_statement import get_sql_statement_response
 SDK_VERSION = "1.0.2"
 SDK_PACKAGE = "python"
 
+# Add the fixture decorator
+@pytest.fixture(autouse=True)
+def requests_mock_fixture():
+    with requests_mock.Mocker() as m:
+        yield m
+
 
 def get_rpc():
     return RPC("https://test.com", "api_key")

@ -1,4 +1,6 @@
 import json
+import pytest
+import requests_mock
 
 from ..errors.server_error import ServerError
 from ..models import Query, QueryStatus

@ -14,6 +16,11 @@ from .utils.mock_data.create_query_run import create_query_run_response
 from .utils.mock_data.get_query_results import get_query_results_response
 from .utils.mock_data.get_query_run import get_query_run_response
 
+@pytest.fixture(autouse=True)
+def requests_mock_fixture():
+    with requests_mock.Mocker() as m:
+        yield m
+
 """
 Test Defaults
 """
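Both test modules gain the same autouse fixture, which runs every test inside a `requests_mock.Mocker()` so no real HTTP traffic can escape the suite. A self-contained Python sketch of how such a fixture is consumed (URL and payload are made up for illustration):

```
import pytest
import requests
import requests_mock

@pytest.fixture(autouse=True)
def requests_mock_fixture():
    # autouse: active for every test in the module, even ones
    # that do not request the fixture explicitly.
    with requests_mock.Mocker() as m:
        yield m

def test_rpc_call_is_intercepted(requests_mock_fixture):
    requests_mock_fixture.post(
        "https://test.com/json-rpc",
        json={"jsonrpc": "2.0", "id": 1, "result": {}},
    )
    resp = requests.post("https://test.com/json-rpc", json={"method": "createQueryRun"})
    assert resp.json()["id"] == 1
```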
0	r/shroomDK/.Rhistory	Normal file
@ -1,7 +1,7 @@
 Package: shroomDK
 Type: Package
 Title: Accessing the Flipside Crypto ShroomDK API
-Version: 0.2.0
+Version: 0.3.0
 Author: Carlos Mercado
 Maintainer: Carlos Mercado <carlos.mercado@flipsidecrypto.com>
 Description: Programmatic access to Flipside Crypto data via the Compass RPC API: <https://api-docs.flipsidecrypto.xyz/>. As simple as auto_paginate_query() but with core functions as needed for troubleshooting. Note, 0.1.1 support deprecated 2023-05-31.
@ -3,12 +3,14 @@ library(httr)
 
 #' Auto Paginate Queries
 #'
-#' @description Grabs up to maxrows in a query by going through each page to download one at a time.
+#' @description Intelligently grab up to 1 Gigabyte of data from a SQL query including automatic pagination and cleaning.
 #'
 #' @param query The SQL query to pass to ShroomDK
 #' @param api_key ShroomDK API key.
-#' @param page_size Default 1000. May return error if page_size is tool large and data to exceed 30MB.
-#' @param page_count Default 1. How many pages, of page_size rows each, to read.
+#' @param page_size Default 25,000. May return error if `page_size` is too large (if page exceeds 30MB or entire query >1GB). Ignored if results fit on 1 page of < 15 MB of data.
+#' @param page_count How many pages, of page_size rows each, to read. Default NULL calculates the ceiling (# rows in results / page_size). Ignored if results fit on 1 page of < 15 MB of data.
+#' @param data_source Where data is sourced, including specific computation warehouse. Default `"snowflake-default"`. Non-default data sources may require registration of api_key to allowlist.
+#' @param data_provider Who provides data. Default `"flipside"`. Non-default data providers may require registration of api_key to allowlist.
 #' @param api_url default to https://api-v2.flipsidecrypto.xyz/json-rpc but upgradeable for user.
 #' @return data frame of up to `page_size * page_count` rows, see ?clean_query for more details on column classes.
 #' @import jsonlite httr

@ -18,18 +20,72 @@ library(httr)
 #' pull_data <- auto_paginate_query("
 #' SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 10001",
 #' api_key = readLines("api_key.txt"),
-#' page_count = 10)
+#' page_size = 9000, # ends up ignored because results fit on 1 page.
+#' page_count = NULL)
 #' }
-auto_paginate_query <- function(query, api_key, page_size = 1000,
-                                page_count = 1,
+auto_paginate_query <- function(query, api_key,
                                page_size = 25000,
+                                page_count = NULL,
+                                data_source = "snowflake-default",
+                                data_provider = "flipside",
                                 api_url = "https://api-v2.flipsidecrypto.xyz/json-rpc"){
 
   qtoken <- create_query_token(query = query,
                                api_key = api_key,
                                ttl = 1,
                                mam = 10,
+                               data_source = data_source,
+                               data_provider = data_provider,
                                api_url = api_url)
+
+  query_run_id <- qtoken$result$queryRequest$queryRunId
+  status_check_done <- FALSE
+  warn_flag <- FALSE
+
+  while (!status_check_done) {
+    query_status <- get_query_status(query_run_id = query_run_id, api_key = api_key, api_url = api_url)
+    query_state <- query_status$result$queryRun$state
+
+    failed_to_get_a_state = 0
+
+    if(failed_to_get_a_state > 2){
+      warning("Query has failed state more than twice, consider cancel_query(), exiting now")
+      stop("Exited due to 3+ Failed States")
+    }
+
+    if(length(query_state) == 0){
+      warning("Query failed to return a state, trying again")
+      Sys.sleep(5)
+      failed_to_get_a_state = failed_to_get_a_state + 1
+    } else {
+      if(query_state == "QUERY_STATE_SUCCESS"){
+        status_check_done <- TRUE
+        result_num_rows <- query_status$result$queryRun$rowCount
+        result_file_size <- as.numeric(query_status$result$queryRun$totalSize)
+        next()
+      } else if(query_state == "QUERY_STATE_FAILED"){
+        status_check_done <- TRUE
+        stop(query_status$result$queryRun$errorMessage)
+      } else if(query_state == "QUERY_STATE_CANCELED"){
+        status_check_done <- TRUE
+        stop("This query was canceled, typically by cancel_query()")
+      } else if(query_state != "QUERY_STATE_SUCCESS"){
+        warning(
+          paste0("Query in process, checking again in 10 seconds.",
+                 "To cancel use: cancel_query() with your ID: \n", query_run_id)
+        )
+        Sys.sleep(10)
+      }
+    }
+  }
+
+  if(is.null(page_count)){
+    page_count <- ceiling(result_num_rows / page_size)
+  }
+
+  # if the result is large (estimated at 15+ MB) paginate
+  # otherwise grab it all at once.
+  if(result_file_size > 15000000){
   res <- lapply(1:page_count, function(i){
     temp_page <- get_query_from_token(qtoken$result$queryRequest$queryRunId,
                                       api_key = api_key,

@ -46,10 +102,21 @@ auto_paginate_query <- function(query, api_key, page_size = 1000,
     return(df)
   })
 
+  # drop empty pages if they accidentally appear
   res <- res[unlist(lapply(res, nrow)) > 0]
 
   df <- do.call(rbind.data.frame, res)
 
-  return(df)
+  } else {
+    temp_page <- get_query_from_token(qtoken$result$queryRequest$queryRunId,
+                                      api_key = api_key,
+                                      page_number = 1,
+                                      page_size = result_num_rows,
+                                      result_format = "csv",
+                                      api_url = api_url)
+
+    df <- clean_query(temp_page)
+  }
+
+  return(df)
 }
@ -2,7 +2,7 @@
 
 #' Clean Query
 #'
-#' @description converts query response to data frame while attempting to coerce classes
+#' @description Converts query response to data frame while attempting to coerce classes
 #' intelligently.
 #'
 #' @param request The request output from get_query_from_token()

@ -22,8 +22,7 @@
 #' \dontrun{
 #' query <- create_query_token("SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 1000", api_key)
 #' request <- get_query_from_token(query$result$queryRequest$queryRunId, api_key)
-#' df1 <- clean_query(request, try_simplify = TRUE) # warning b/c of tx_json
-#' df2 <- clean_query(request, try_simplify = FALSE) # silently returns columns of lists
+#' df1 <- clean_query(request, try_simplify = TRUE)
 #' }
 clean_query <- function(request, try_simplify = TRUE){
 
@ -4,13 +4,15 @@ library(httr)
 #' Create Query Token
 #'
 #' Uses Flipside ShroomDK to create a Query Token to access Flipside Crypto
-#' data. The query token is cached up to ttl minutes
+#' data. The query token is kept `ttl` hours and available for no-additional-cost reads up to `mam` minutes (i.e., cached to the same exact result),
 #' allowing for pagination and multiple requests before expending more daily request uses.
 #'
 #' @param query Flipside Crypto Snowflake SQL compatible query as a string.
 #' @param api_key Flipside Crypto ShroomDK API Key
 #' @param ttl time-to-live (in hours) to keep query results available. Default 1 hour.
 #' @param mam max-age-minutes, lifespan of cache. set to 0 to always re-execute. Default 10 minutes.
+#' @param data_source Where data is sourced, including specific computation warehouse. Default "snowflake-default". Non-default data sources may require registration of api_key to allowlist.
+#' @param data_provider Who provides data. Default "flipside". Non-default data providers may require registration of api_key to allowlist.
 #' @param api_url default to https://api-v2.flipsidecrypto.xyz/json-rpc but upgradeable for user.
 #' @return list of `token` and `cached` use `token` in `get_query_from_token()`
 #' @import jsonlite httr

@ -19,14 +21,17 @@ library(httr)
 #' @examples
 #' \dontrun{
 #' create_query_token(
-#'   query = "SELECT * FROM ethereum.core.fact_transactions LIMIT 1",
+#'   query = "SELECT * FROM ethereum.core.fact_transactions LIMIT 33",
 #'   api_key = readLines("api_key.txt"),
 #'   ttl = 1,
-#'   mam = 5)
+#'   mam = 5
+#' )
 #'}
 create_query_token <- function(query, api_key,
                                ttl = 1,
                                mam = 10,
+                               data_source = "snowflake-default",
+                               data_provider = "flipside",
                                api_url = "https://api-v2.flipsidecrypto.xyz/json-rpc"){
 
   headers = c(

@ -58,11 +63,11 @@ create_query_token <- function(query, api_key,
         "sql" = query,
         "tags" = list(
           "sdk_package" = "R",
-          "sdk_version" = "0.2.0",
+          "sdk_version" = "0.3.0",
           "sdk_language" = "R"
         ),
-        "dataSource" = "snowflake-default",
-        "dataProvider" = "flipside"
+        "dataSource" = data_source,
+        "dataProvider" = data_provider
       )
     ),
     "id" = 1
@ -4,7 +4,8 @@ library(httr)
 #' Get Query From Token
 #'
 #' Uses Flipside ShroomDK to access a Query Token (Run ID). This function is for pagination and multiple requests.
-#' Note: To reduce payload it returns a list of outputs (separating column names from rows). Use `clean_query()` to
+#' It is best suited for debugging and testing new queries. Consider `auto_paginate_query()` for queries already known to work as expected.
+#' Note: To reduce payload it returns a list of outputs (separating column names from rows). See `clean_query()` for converting result to a data frame.
 #'
 #' @param query_run_id queryRunId from `create_query_token()`, for token stored as `x`, use `x$result$queryRequest$queryRunId`
 #' @param api_key Flipside Crypto ShroomDK API Key

@ -29,31 +30,48 @@ get_query_from_token <- function(query_run_id, api_key,
                                 result_format = "csv",
                                 api_url = "https://api-v2.flipsidecrypto.xyz/json-rpc"){
 
-  query_status <- get_query_status(query_run_id = query_run_id, api_key = api_key, api_url = api_url)
-  query_state <- query_status$result$queryRun$state
-
-  # implicit else for "QUERY_STATUS_SUCCESS"
-  if(query_state == "QUERY_STATE_FAILED"){
-    stop(query_status$result$queryRun$errorMessage)
-  } else if(query_state == "QUERY_STATE_CANCELED"){
-    stop("This query was canceled, typically by cancel_query()")
-  } else if(query_state != "QUERY_STATE_SUCCESS"){
-    warning("Query in process, checking again in 5 seconds")
-    Sys.sleep(5)
-    # run it back
-    return(
-      get_query_from_token(query_run_id = query_run_id,
-                           api_key = api_key,
-                           page_number = page_number,
-                           page_size = page_size,
-                           result_format = result_format,
-                           api_url = api_url
-      )
-    )
-  } else {
-  }
+  status_check_done <- FALSE
+  warn_flag <- FALSE
+
+  while (!status_check_done) {
+    query_status <- get_query_status(query_run_id = query_run_id, api_key = api_key, api_url = api_url)
+    query_state <- query_status$result$queryRun$state
+    failed_to_get_a_state = 0
+
+    if(failed_to_get_a_state > 2){
+      warning("Query has failed state more than twice, consider cancel_query(), exiting now")
+      stop("Exited due to 3+ Failed States")
+    }
+
+    if(length(query_state) == 0){
+      warning("Query failed to return a state, trying again")
+      Sys.sleep(5)
+      failed_to_get_a_state = failed_to_get_a_state + 1
+    } else {
+      if(query_state == "QUERY_STATE_SUCCESS"){
+        status_check_done <- TRUE
+        next()
+      } else if(query_state == "QUERY_STATE_FAILED"){
+        status_check_done <- TRUE
+        stop(query_status$result$queryRun$errorMessage)
+      } else if(query_state == "QUERY_STATE_CANCELED"){
+        status_check_done <- TRUE
+        stop("This query was canceled, typically by cancel_query()")
+      } else if(query_state != "QUERY_STATE_SUCCESS"){
+        warning(
+          paste0("Query in process, checking again in 10 seconds.",
                 "To cancel use: cancel_query() with your ID: \n", query_run_id)
+        )
+        Sys.sleep(5)
+      } else if(query_state != "QUERY_STATE_SUCCESS"){
+        warning(
+          paste0("Query in process, checking again in 10 seconds.",
+                 "To cancel use: cancel_query() with your ID: \n", query_run_id)
+        )
+        Sys.sleep(10)
+      }
+    }
+  }
 
   headers = c(
     "Content-Type" = 'application/json',
@ -1,98 +1,168 @@
 # shroomDK
 
-ShroomDK is an R package for simplifying access to the Flipside Crypto ShroomDK REST API. More details available at [sdk.flipsidecrypto.xyz/shroomdk](https://sdk.flipsidecrypto.xyz/shroomdk)
+ShroomDK is an R package for simplifying access to the Flipside Crypto Compass RPC API. More details available at [docs.flipsidecrypto.com/api-sdk-developers](https://docs.flipsidecrypto.com/api-sdk-developers/).
 
 ## How to get your own ShroomDK API Key
 
-ShroomDK API Keys are NFTs on the Ethereum blockchain. They are free to mint (not counting Ethereum gas) and new mints are available each day. Alternatively you can buy the NFT on any NFT Marketplace where listed (e.g., OpenSea).
+ShroomDK API Keys were originally NFTs on the Ethereum blockchain. They are now standard API keys available in your [flipsidecrypto user profile](https://flipsidecrypto.xyz/settings/api-keys). Every user gets 500 query seconds as a free Community tier. Additional query seconds can be purchased via the Builder or Pro tier. Enterprises seeking [Snowflake Data Shares](https://data.flipsidecrypto.com/) or scaled pricing can reach out via email to `data-shares@flipsidecrypto.com`.
 
-## How to Install
+The [Data Studio](https://flipsidecrypto.xyz/) remains free for analysts analyzing data ad-hoc, creating dashboards, and testing queries. It is recommended you test queries in the studio prior to using them to pull data via the API.
+
+## Install from CRAN
+
+Current Version: 0.3.0
 
 ```
+install.packages("shroomDK")
+library(shroomDK)
+```
+
+## How to Install Latest from Github
+
+```
 library(devtools) # install if you haven't already
 devtools::install_github(repo = 'FlipsideCrypto/sdk', subdir = 'r/shroomDK')
 library(shroomDK)
 ```
 
-## 3 Main Functions
+## 1 Main Wrapper Function
+
+Intelligently grab up to 1 Gigabyte of data from a SQL query including automatic pagination and cleaning.
+
+### auto_paginate_query()
+
+Documentation can be viewed within RStudio with `?auto_paginate_query`; for new packages you may need to restart R to get to the documentation. It is summarized here:
+
+| Parameter | Description |
+|-----------------------------|------------------------------------------|
+| query | The SQL query to pass to ShroomDK |
+| api_key | Your ShroomDK API key |
+| page_size | Default 25,000. May return error if page_size is too large (specifically if data exceeds 30MB or entire query \>1GB). Ignored if results fit on 1 page of \< 15 MB of data |
+| page_count | How many pages, of page_size rows each, to read. Default NULL calculates the ceiling (\# rows in results / page_size). Ignored if results fit on 1 page of \< 15 MB of data |
+| data_source | Where data is sourced, including specific computation warehouse. Default `"snowflake-default"`. Non-default data sources may require registration of api_key to allowlist |
+| data_provider | Who provides data. Default `"flipside"`. Non-default data providers may require registration of api_key to allowlist |
+| api_url | Default to `https://api-v2.flipsidecrypto.xyz/json-rpc` but upgradeable for user |
+
+Returns a data frame of up to `page_size * page_count` rows, see `?clean_query` for more details on column classes.
+
+```
+api_key = readLines("api_key.txt") # always gitignore your API keys!
+pull_data <- auto_paginate_query("
+SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 10001",
+api_key = api_key,
+page_size = 9000, # ends up ignored because results fit on 1 page!
+page_count = NULL) # NULL automatically calculates required number of pages
+```
+
+## 5 Component Functions
 
 ### create_query_token()
 
-Documentation can be viewed within RStudio with ```?create_query_token``` for new packages you may need to restart R to get to the documentation. It is summarized here:
+Uses Flipside ShroomDK to create a Query Token to access Flipside Crypto data. The query token is kept `ttl` hours and available for no-additional-cost reads up to `mam` minutes (i.e., cached to the same exact result), allowing for pagination and multiple requests before expending more daily request uses.
 
-| Item | Definition |
-| ----------- | ----------- |
-| Description | Uses Flipside ShroomDK to create a Query Token to access Flipside Crypto data. The query token is cached up to ttl minutes allowing for pagination and multiple requests before expending more daily request uses.|
-| Usage | create_query_token(query, api_key, ttl = 10, cache = TRUE)|
-| query | Flipside Crypto Snowflake SQL compatible query as a string. |
-| api_key | Flipside Crypto ShroomDK API Key |
-| ttl | time (in minutes) to keep query in cache. |
-| cache | Use cached results; set as FALSE to re-execute. |
-| Value | list of `token` and `cached` use `token` in `get_query_from_token()`|
+Documentation can be viewed within RStudio with `?create_query_token`; for new packages you may need to restart R to get to the documentation. It is summarized here:
 
+| Parameter | Description |
+|-----------------------------|------------------------------------------|
+| query | Flipside Crypto Snowflake SQL compatible query as a string |
+| api_key | Flipside Crypto ShroomDK API Key |
+| ttl | Time-to-live (in hours) to keep query results available. Default 1 hour |
+| mam | Max-age-minutes, lifespan of cache. Set to 0 to always re-execute. Default 10 minutes |
+| data_source | Where data is sourced, including specific computation warehouse. Default `"snowflake-default"`. Non-default data sources may require registration of api_key to allowlist |
+| data_provider | Who provides data. Default `"flipside"`. Non-default data providers may require registration of api_key to allowlist |
+| api_url | Default to <https://api-v2.flipsidecrypto.xyz/json-rpc> but upgradeable for user |
+
+Returns a list of `token` and `cached`. Use `token` in `get_query_from_token()`.
+
 ```
 # example
+api_key = readLines("api_key.txt") # always gitignore your API keys!
 create_query_token(
   query = "SELECT * FROM ethereum.core.fact_transactions LIMIT 1",
-  api_key = readLines("api_key.txt"), # gitignore your api_key! don't share!
-  ttl = 15,
-  cache = TRUE)
+  api_key = api_key,
+  ttl = 1,
+  mam = 5)
 ```
+
+### get_query_status()
+
+Access the status of a query run id from `create_query_token()`.
+
+Documentation can be viewed within RStudio with `?get_query_status`; for new packages you may need to restart R to get to the documentation. It is summarized here:
+
+| Parameter | Description |
+|-----------------------------|------------------------------------------|
+| query_run_id | queryRunId from `create_query_token()`, for token stored as `x`, use `x$result$queryRequest$queryRunId` |
+| api_key | Flipside Crypto ShroomDK API Key |
+| api_url | Default to `https://api-v2.flipsidecrypto.xyz/json-rpc` but upgradeable for user |
+
+Returns request content; for content `x`, use `x$result$queryRun$state` and `x$result$queryRun$errorMessage`. Expect one of `QUERY_STATE_READY`, `QUERY_STATE_RUNNING`, `QUERY_STATE_STREAMING_RESULTS`, `QUERY_STATE_SUCCESS`, `QUERY_STATE_FAILED`, `QUERY_STATE_CANCELED`.
+
+```
+api_key = readLines("api_key.txt") # always gitignore your API keys!
+query = create_query_token("SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 10000", api_key)
+get_query_status(query$result$queryRequest$queryRunId, api_key)
+```
 
 ### get_query_from_token()
 
-Documentation can be viewed within RStudio with ```?get_query_from_token``` for new packages you may need to restart R to get to the documentation. It is summarized here:
+Access results of a Query Token (Run ID). This function is for pagination and multiple requests. It is best suited for debugging and testing new queries. Consider `auto_paginate_query()` for queries already known to work as expected.
 
-| Item | Definition |
-| ----------- | ----------- |
-| Description | Uses Flipside ShroomDK to access a Query Token. Query tokens are cached up to 'ttl' minutes for each 'query'. This function is for pagination and multiple requests while reducing your use of your daily rate limit. Note: To reduce payload it returns a list of outputs (separating column names from rows).|
-| Usage | get_query_from_token(query_token, api_key, page_number = 1, page_size = 1e+05)|
-| query_token | token from `create_query_token()` |
-| api_key | Flipside Crypto ShroomDK API Key |
-| page_number |Query tokens are cached and 100k rows max. Get up to 1M rows by going through pages. |
-| page_size | Default 100,000. Paginate via page_number. |
-| Value | returns a request of length 8: `results`, `columnLabels`, `columnTypes`, `startedAt`, `endedAt`, `pageNumber`, `pageSize`, `status` |
+Note: To reduce payload it returns a list of outputs (separating column names from rows). See `clean_query()` for converting result to a data frame.
 
+Documentation can be viewed within RStudio with `?get_query_from_token`; for new packages you may need to restart R to get to the documentation. It is summarized here:
+
+| Parameter | Description |
+|--------------------------------------------|----------------------------|
+| query_run_id | queryRunId from `create_query_token()`, for token stored as `x`, use `x$result$queryRequest$queryRunId` |
+| api_key | Flipside Crypto ShroomDK API Key |
+| page_number | Results are cached, max 30MB of data per page |
+| page_size | Default 1000. Paginate via page_number. May return error if page_size causes data to exceed 30MB |
+| result_format | Default to csv. Options: csv and json |
+| api_url | Default to <https://api-v2.flipsidecrypto.xyz/json-rpc> but upgradeable for user |
+
+Returns a list of jsonrpc, id, and result. Within result are: columnNames, columnTypes, rows, page, sql, format, originalQueryRun, redirectedToQueryRun. Use `clean_query()` to transform this into a data frame. If a query exactly matches another recently run query, the run will be redirected to the results of the earlier query run ID to reduce costs.
+
 ```
 # example
-query = create_query_token("SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 10000", api_key) #gitignore your API key!
-get_query_from_token(query$token, api_key, 1, 10000)
+api_key = readLines("api_key.txt") # always gitignore your API keys!
+query <- create_query_token("SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 1000", api_key)
+fact_transactions <- get_query_from_token(query$result$queryRequest$queryRunId, api_key, 1, 1000)
 ```
+
+### cancel_query()
+
+CANCEL a query run id from `create_query_token()`. As the new API uses warehouse-seconds to charge users above the free tier, the ability to cancel is critical for cost management.
+
+Documentation can be viewed within RStudio with `?cancel_query`; for new packages you may need to restart R to get to the documentation. It is summarized here:
+
+| Parameter | Description |
+|-----------------------------------------|-------------------------------|
+| query_run_id | queryRunId from `create_query_token()`, for token stored as `x`, use `x$result$queryRequest$queryRunId` |
+| api_key | Flipside Crypto ShroomDK API Key |
+| api_url | Default to <https://api-v2.flipsidecrypto.xyz/json-rpc> but upgradeable for user |
+
+Returns a list of the status_canceled (TRUE or FALSE) and the cancel object (which includes related details).
 
 ### clean_query()
 
-Documentation can be viewed within RStudio with ```?clean_query``` for new packages you may need to restart R to get to the documentation. It is summarized here:
+Converts query response to data frame while attempting to coerce classes intelligently.
 
-| Item | Definition |
-| ----------- | ----------- |
-| Description | Cleans Query to be in Data Frame format |
-| Usage | clean_query(request, try_simplify = TRUE)|
-| `request` | The request output from `get_query_from_token()` |
-|try_simplify | because requests can return JSON and/or may not have the same length across values, they may not be data frame compliant (all columns having the same number of rows). A key example would be TX_JSON in EVM FACT_TRANSACTION tables which include 50+ extra details from transaction logs. But other examples like `NULL` values in TO_ADDRESS can have similar issues. Default `TRUE`. |
-| Value | Always returns a data frame. If 'try_simplify' is `FALSE` OR if `try_simplify = TRUE` fails (columns having different number of rows) then the data frame is comprised of lists, where each column must be coerced to a desired class (e.g., with `as.numeric()`) to ensure each column has the same number of rows.|
+Documentation can be viewed within RStudio with `?clean_query`; for new packages you may need to restart R to get to the documentation. It is summarized here:
 
+| Parameter | Description |
+|---------------------------------------|---------------------------------|
+| request | The request output from `get_query_from_token()` |
+| try_simplify | Because requests can return JSON and may not have the same length across values, they may not be data frame compliant (all columns having the same number of rows). A key example would be TX_JSON in EVM FACT_TRANSACTION tables which include 50+ extra details from transaction logs. But other examples like NULLs in TO_ADDRESS can have similar issues. Default TRUE |
+
+Returns a data frame. If `try_simplify` is FALSE OR if `try_simplify` TRUE fails: the data frame is comprised of lists, where each column must be coerced to a desired class (e.g., with `as.numeric()`).
+
 Note: The vast majority (95%+) of queries will return a simple data frame with the classes coerced intelligently (e.g., Block_Number being numeric). But check the warnings and check your column classes, if the class is a list then try_simplify failed (i.e., not all columns have the same number of rows when coerced).
 
 ```
 #example
-query = create_query_token("SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 10000", api_key)
-request = get_query_from_token(query$token, api_key, 1, 10000)
-clean_query(request, try_simplify = FALSE) # returns data frame of lists()
+api_key = readLines("api_key.txt") # always gitignore your API keys!
+query <- create_query_token("SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 1000", api_key)
+request <- get_query_from_token(query$result$queryRequest$queryRunId, api_key)
+df <- clean_query(request, try_simplify = TRUE)
 ```
-
-## 1 Support Function
-
-### auto_paginate_query()
-
-Documentation can be viewed within RStudio with ```?auto_paginate_query``` for new packages you may need to restart R to get to the documentation. It is summarized here:
-
-| Item | Definition |
-| ----------- | ----------- |
-| Description | Grabs up to `maxrows` in a query by going through each page 100k rows at a time. |
-| Usage | auto_paginate_query(query, api_key)|
-| query | Flipside Crypto Snowflake SQL compatible query as a string. |
-| api_key | Flipside Crypto ShroomDK API Key |
-| maxrows | Flipside Crypto ShroomDK maximum rows in query, default 1,000,000 |
-| value | data frame of up to 1M rows, see `?clean_query` for more details on column classes |
@ -7,8 +7,10 @@
 auto_paginate_query(
   query,
   api_key,
-  page_size = 1000,
-  page_count = 1,
+  page_size = 25000,
+  page_count = NULL,
+  data_source = "snowflake-default",
+  data_provider = "flipside",
   api_url = "https://api-v2.flipsidecrypto.xyz/json-rpc"
 )
 }

@ -17,9 +19,13 @@ auto_paginate_query(
 
 \item{api_key}{ShroomDK API key.}
 
-\item{page_size}{Default 1000. May return error if page_size is tool large and data to exceed 30MB.}
+\item{page_size}{Default 25,000. May return error if `page_size` is too large (if page exceeds 30MB or entire query >1GB). Ignored if results fit on 1 page of < 15 MB of data.}
 
-\item{page_count}{Default 1. How many pages, of page_size rows each, to read.}
+\item{page_count}{How many pages, of page_size rows each, to read. Default NULL calculates the ceiling (# rows in results / page_size). Ignored if results fit on 1 page of < 15 MB of data.}
+
+\item{data_source}{Where data is sourced, including specific computation warehouse. Default `"snowflake-default"`. Non-default data sources may require registration of api_key to allowlist.}
+
+\item{data_provider}{Who provides data. Default `"flipside"`. Non-default data providers may require registration of api_key to allowlist.}
 
 \item{api_url}{default to https://api-v2.flipsidecrypto.xyz/json-rpc but upgradeable for user.}
 }

@ -27,13 +33,14 @@ auto_paginate_query(
 data frame of up to `page_size * page_count` rows, see ?clean_query for more details on column classes.
 }
 \description{
-Grabs up to maxrows in a query by going through each page to download one at a time.
+Intelligently grab up to 1 Gigabyte of data from a SQL query including automatic pagination and cleaning.
 }
 \examples{
 \dontrun{
 pull_data <- auto_paginate_query("
 SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 10001",
 api_key = readLines("api_key.txt"),
-page_count = 10)
+page_size = 9000, # ends up ignored because results fit on 1 page.
+page_count = NULL)
 }
 }
@ -21,14 +21,13 @@ the data frame is comprised of lists, where each column must be coerced
 to a desired class (e.g., with `as.numeric()`).
 }
 \description{
-converts query response to data frame while attempting to coerce classes
+Converts query response to data frame while attempting to coerce classes
 intelligently.
 }
 \examples{
 \dontrun{
 query <- create_query_token("SELECT * FROM ETHEREUM.CORE.FACT_TRANSACTIONS LIMIT 1000", api_key)
 request <- get_query_from_token(query$result$queryRequest$queryRunId, api_key)
-df1 <- clean_query(request, try_simplify = TRUE) # warning b/c of tx_json
-df2 <- clean_query(request, try_simplify = FALSE) # silently returns columns of lists
+df1 <- clean_query(request, try_simplify = TRUE)
 }
 }
@@ -9,6 +9,8 @@ create_query_token(
 api_key,
 ttl = 1,
 mam = 10,
+data_source = "snowflake-default",
+data_provider = "flipside",
 api_url = "https://api-v2.flipsidecrypto.xyz/json-rpc"
 )
 }
@@ -21,6 +23,10 @@ create_query_token(

 \item{mam}{max-age-minutes, the lifespan of the cache. Set to 0 to always re-execute. Default 10 minutes.}

+\item{data_source}{Where data is sourced, including the specific computation warehouse. Default "snowflake-default". Non-default data sources may require registration of your api_key to an allowlist.}
+
+\item{data_provider}{Who provides the data. Default "flipside". Non-default data providers may require registration of your api_key to an allowlist.}
+
 \item{api_url}{Defaults to https://api-v2.flipsidecrypto.xyz/json-rpc but changeable by the user.}
 }
 \value{
@@ -28,15 +34,16 @@ list of `token` and `cached`; use `token` in `get_query_from_token()`
 }
 \description{
 Uses Flipside ShroomDK to create a Query Token to access Flipside Crypto
-data. The query token is cached up to ttl minutes
+data. The query token is kept `ttl` hours and is available for no-additional-cost reads for up to `mam` minutes (i.e., cached reads return the same exact result),
 allowing for pagination and multiple requests before expending more daily request uses.
 }
 \examples{
 \dontrun{
 create_query_token(
-query = "SELECT * FROM ethereum.core.fact_transactions LIMIT 1",
+query = "SELECT * FROM ethereum.core.fact_transactions LIMIT 33",
 api_key = readLines("api_key.txt"),
 ttl = 1,
-mam = 5)
+mam = 5
+)
 }
 }
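A minimal end-to-end sketch of the token lifecycle described above, assuming a valid key saved in api_key.txt; argument names follow the documentation in this diff:

```r
library(shroomDK)

api_key <- readLines("api_key.txt")

# Token is kept for ttl = 1 hour; reads within mam = 5 minutes
# hit the cache instead of spending another query run.
query <- create_query_token(
  query   = "SELECT * FROM ethereum.core.fact_transactions LIMIT 33",
  api_key = api_key,
  ttl     = 1,
  mam     = 5
)

# Repeat reads of the same token within 5 minutes return the cached result.
request <- get_query_from_token(query$result$queryRequest$queryRunId, api_key)
```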
@@ -33,7 +33,8 @@ use `clean_query()` to transform this into a data frame.
 }
 \description{
 Uses Flipside ShroomDK to access a Query Token (Run ID). This function is for pagination and multiple requests.
-Note: To reduce payload it returns a list of outputs (separating column names from rows). Use `clean_query()` to
+It is best suited for debugging and testing new queries. Consider `auto_paginate_query()` for queries already known to work as expected.
+Note: To reduce payload, it returns a list of outputs (separating column names from rows). See `clean_query()` for converting the result to a data frame.
 }
 \examples{
 \dontrun{
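Following the description above, a short sketch of the debugging flow, reusing `query` and `api_key` from the previous example:

```r
# The raw payload separates column names from rows to reduce size;
# clean_query() rebuilds a data frame from it.
request <- get_query_from_token(query$result$queryRequest$queryRunId, api_key)
str(request, max.level = 2)                     # inspect the raw list payload
df <- clean_query(request, try_simplify = TRUE) # convert to a data frame
head(df)
```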
@@ -18,3 +18,4 @@ StripTrailingWhitespace: Yes
 BuildType: Package
 PackageUseDevtools: Yes
 PackageInstallArgs: --no-multiarch --with-keep.source
+PackageCheckArgs: --as-cran --no-manual
BIN r/shroomDK_0.2.1.tar.gz (Normal file): Binary file not shown.
BIN r/shroomDK_0.3.0.tar.gz (Normal file): Binary file not shown.