Building a Binance Reserves Tracking Bot (Part 1)

Mykhailo Kushnir
Published in Level Up Coding
5 min read · May 29, 2022


My passion for data has led me to build several tools for capturing essential bits of information. Unfortunately, hosting those tools locally and running them on a schedule became tedious, so I started looking for other options. Here's what I found.

Photo by Behnam Norouzi on Unsplash

Use Case: For my trading bot, I needed to capture hourly information about reserves on centralized exchanges like Binance. While this data can be taken from on-chain data explorers like IntoTheBlock or CryptoQuant, I wanted a low-cost solution for validating the idea.

API to use: https://api.etherscan.io/api

Etherscan offers a free subscription tier with sensible rate limits. My goal was to call the endpoint from Python and store the data in a database, all on a schedule that runs every hour. To get more visibility into the status of the job, I also attached Telegram notifications to it.

I have just released a new course on Udemy called “Practical Web Scraping Course”. It contains video tutorials packed with coding tips for web scraping that will help you build your data-based apps in no time. You can find the code and ideas from this article and my other tutorials there in a convenient visual format.

Create a spider

This part of the solution starts with scraping the list of all accounts whose name tags are relevant to Binance. For a proof-of-concept solution, I executed the following JavaScript in the browser and copy-pasted the result:

arr = Array.from(document.querySelectorAll('tr td a'));
arr.map((a) => a.innerText);

This code would be enough for the POC as these accounts don’t change often.

To capture the data from the Etherscan API, here's what you need to do:

I take the list of accounts collected above and execute a request to the API for each of them, with a pause of one second between calls. The pause keeps Etherscan from blocking the script with its rate limiting. I also convert the returned balance from wei to ETH.
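
A minimal sketch of that script, assuming an ETHERSCAN_API_KEY environment variable; BINANCE_ACCOUNTS is a placeholder for the addresses scraped above, and send_telegram_notification and insert_records are shown later in this article:

import os
import time
import requests

ETHERSCAN_API = "https://api.etherscan.io/api"
API_KEY = os.environ["ETHERSCAN_API_KEY"]  # assumed environment variable

# Placeholder: paste the addresses scraped in the previous step here.
BINANCE_ACCOUNTS = [
    # "0x...",
]

def get_eth_balance(address):
    """Fetch one account's balance and convert it from wei to ETH."""
    response = requests.get(
        ETHERSCAN_API,
        params={
            "module": "account",
            "action": "balance",
            "address": address,
            "tag": "latest",
            "apikey": API_KEY,
        },
        timeout=30,
    )
    response.raise_for_status()
    return int(response.json()["result"]) / 10**18

def main():
    records = []
    for address in BINANCE_ACCOUNTS:
        records.append((address, get_eth_balance(address)))
        time.sleep(1)  # stay under Etherscan's free-tier rate limit
    insert_records(records)  # defined in the DB section below
    total = sum(balance for _, balance in records)
    send_telegram_notification(  # defined in the notifications section below
        f"Captured {len(records)} Binance accounts, total {total:.2f} ETH"
    )

if __name__ == "__main__":
    main()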

In the sections below, you'll see the implementations of the send_telegram_notification and insert_records methods.

Add notifications

Having notifications lets you glance at the status of the application during your day-to-day routine. There are multiple candidate receiver applications, but I selected Telegram as the one I use most often. Since this is a tutorial about web scraping, I'll just point you to StackOverflow, where you can find all the particulars of sending notifications to Telegram through Python. Also, here's the kind of helper I use to send a meaningful notification to myself:
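
A minimal sketch, assuming TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID environment variables hold the bot token and the target chat id:

import os
import requests

TELEGRAM_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]  # assumed environment variable
TELEGRAM_CHAT_ID = os.environ["TELEGRAM_CHAT_ID"]  # assumed environment variable

def send_telegram_notification(message):
    """Send a plain-text message to a Telegram chat via the Bot API."""
    url = f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/sendMessage"
    response = requests.post(
        url,
        json={"chat_id": TELEGRAM_CHAT_ID, "text": message},
        timeout=30,
    )
    response.raise_for_status()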

Test it manually

My goal at this point is only to prove that it works before making the solution more complicated and deployable. To verify its readiness, just start it as a Python module:

python -m main

The expected output would be:

The expected output in the terminal

Of course, the ETH value would change :)

Add a connection to DB

Now let's scale the solution a bit. Storing the output in files has multiple drawbacks, like the inability to access the data away from your computer and the risk of losing it to a hardware or software malfunction. Let's add a persistence layer to the application by connecting a PostgreSQL database. From my previous tutorials, you know that I often pick Heroku as the host for my POC applications, and this time is no different.

To add a new PostgreSQL instance to your app, run the following command with the Heroku CLI:

heroku addons:create heroku-postgresql:hobby-dev

Heroku encourages app creators to use environment variables to hide sensitive information like API keys or connection strings. To get a connection to your newly created DB, you have to read a variable called DATABASE_URL:
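
In Python, that is a single environment lookup:

import os

# Heroku injects this variable when the heroku-postgresql add-on is attached.
DATABASE_URL = os.environ["DATABASE_URL"]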

For Python-to-PostgreSQL communication, I used a library called psycopg2. It exposes the familiar interface of writing to the DB through cursors. Before inserting new records, you have to create a table with all the needed fields:
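
The sketch below uses a hypothetical binance_reserves table with the fields the script needs: the account address, its balance in ETH, and a capture timestamp.

import psycopg2

def create_table():
    """Create the reserves table if it does not exist yet."""
    with psycopg2.connect(DATABASE_URL, sslmode="require") as connection:
        with connection.cursor() as cursor:
            cursor.execute(
                """
                CREATE TABLE IF NOT EXISTS binance_reserves (
                    id SERIAL PRIMARY KEY,
                    account VARCHAR(64) NOT NULL,
                    balance_eth NUMERIC NOT NULL,
                    captured_at TIMESTAMP NOT NULL DEFAULT NOW()
                );
                """
            )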

And here's the method that inserts new records into the DB:
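
A sketch of insert_records, which takes the (account, balance) tuples collected earlier and writes them in one batch with executemany:

def insert_records(records):
    """Insert (account, balance_eth) tuples captured during a single run."""
    with psycopg2.connect(DATABASE_URL, sslmode="require") as connection:
        with connection.cursor() as cursor:
            cursor.executemany(
                "INSERT INTO binance_reserves (account, balance_eth) VALUES (%s, %s);",
                records,
            )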

Deploy to Heroku

Pushing your code is not a tricky task with the Heroku CLI.

First, set up a Procfile with the following line:

web: python -m main

And then push the code to Heroku:

git push heroku HEAD:master

This is not my first tutorial featuring Heroku. The one I did before was about creating a Software-as-a-Service product within 24 hours. Give it a chance if you are interested in the subject.

Test on Heroku

If everything went correctly, run the following command to trigger the deployed instance from your own PC:

heroku run web

Schedule execution

As I mentioned at the beginning of this article, I want the script to run every hour, so I needed a scheduling solution. Because the current version of the script is quite fast, I decided to use Heroku Scheduler, as it's free and supports the configuration that I need. Add it to your app like this:

heroku addons:create scheduler:standard

The configuration is straightforward: in the Heroku Scheduler dashboard, set the frequency to every hour and the job's command to python -m main.

Conclusion

Data retrieval depends heavily on automation. Your scraper would be useless if you had to run it over and over again manually: humans lack consistency, and consistent data collection is vital. A scheduled job that triggers your script is a must-have if you expect it to actually work for your benefit.

In the next part, I'll show you how to visualise the results and share them with others. Follow this blog if you don't want to miss the next update.
