In previous posts we looked at creating market orders and limit orders with {binance}. We saw a couple of successful trades. However, sometimes trades are not successful and the orders are not filled. Let’s try to understand why.
Functionality for working with spot trades is now available in {binance}. In this post we’ll establish some background on spot trading and then explore some related functions.
Dust refers to the fragments of coins which are too small to use for transactions. In the fiat world the equivalent would be those worthless coins with too little value to actually buy anything, that take up space in your wallet and end up scattered across parking areas.
Binance allows you to convert dust into BNB. In this post I discuss the functions in {binance} which support this operation.
I started dabbling in crypto trading on Binance at the beginning of September 2021. I am really impressed with the interface, which is smooth and full-featured (if perhaps a little complicated and confusing!). One of the things that has frustrated me, though, is not being able to get an idea of whether I’m making progress. There’s no view which shows me the overall status of my account and how this has evolved over time.
Fathom Data has been doing a lot of work with the HCRIS (Healthcare Cost Report Information System) data. The underlying reports are submitted as a spreadsheet with multiple sheets. The data are then extracted and recorded in a simple tabular format, with each field linked to a worksheet code (wksht_cd), column number (clmn_num) and line number (line_num). These three keys are then mapped to a single compound key. The resulting data look something like this:
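The compound key is just the three components glued together. A minimal sketch in R, assuming the extracted records sit in a data frame called hcris (the data frame name and the choice of separator are illustrative):

library(dplyr)
library(tidyr)

# Combine the three keys into a single compound key, keeping the originals.
hcris <- hcris %>%
  unite("key", wksht_cd, clmn_num, line_num, sep = "-", remove = FALSE)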
Being able to view related messages as threads is really useful. To make this possible, messages must use either the In-Reply-To or References header field to link to the Message-ID from another message.
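For example, a reply that should land in the same thread as a message with Message-ID <original@example.com> would carry headers along these lines (the IDs are made up):

Message-ID: <reply-0001@example.com>
In-Reply-To: <original@example.com>
References: <original@example.com>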
The {emayili} package supports configuring a generic SMTP server via the server() function. In the most recent version, v0.6.5, we’ve added three new functions, gmail(), sendgrid() and mailgun(), which provide specific support for Gmail, SendGrid and Mailgun.
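So rather than spelling out the host and port with server(), you can reach for the provider-specific helper. A sketch for Gmail (the argument names here are assumptions, so check the documentation for the exact signatures):

library(emayili)

# Generic SMTP configuration.
smtp <- server(
  host = "smtp.gmail.com",
  port = 465,
  username = "alice@gmail.com",
  password = "app-password"
)

# Equivalent, using the new Gmail-specific helper.
smtp <- gmail(
  username = "alice@gmail.com",
  password = "app-password"
)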
Sometimes you need to have a message delivered immediately. Other times it doesn’t matter when it’s delivered. Similarly, you might want the recipient to read a message immediately. Or you may not really care when they read it. The ability to specify message priority and importance in {emayili} has been added to address both scenarios.
library(emayili)
packageVersion("emayili")
[1] '0.6.1'
Importance
The Importance header specifies how important a message is (surprise!). It reflects how important the sender thinks the message is, which might not necessarily agree with the recipient’s opinion. According to RFC 4021 this (optional) field can assume one of three values: low, normal or high.
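On the wire it’s just another header line (the address and subject are made up, obviously):

To: bob@example.com
Subject: Server is on fire
Importance: high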
How can you be sure that the contents of an email haven’t been tampered with? The best approach would probably be to have a digital signature on each component of the message. Perhaps I’ll look at integrating that into {emayili} some time in the future. However, today I’m writing about the first step in that direction: MD5 checksums.
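The checksum part is simple enough. Just to illustrate the idea, this is what computing an MD5 checksum of some content looks like using the {digest} package (this is not the {emayili} machinery, just the underlying concept):

library(digest)

body <- "Hello, World!"

# MD5 checksum of the string itself (rather than of its serialised form).
digest(body, algo = "md5", serialize = FALSE)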
The concept of “wide data” is relative. In some domains 100 columns is considered “wide”, while in others that’s perfectly normal and you’d need to have thousands (or tens of thousands!) of columns for it to be considered even remotely “wide”. The data that we work with at Fathom Data generally lies in the first domain, but from time to time we do work on data that is considerably wider.
A proxy server acts as an intermediary between a client and a server. When a request goes through a proxy server there is no direct connection between the client and the server. The client connects to the proxy and the proxy then connects to the server. Requests and responses pass through the proxy.
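To make that concrete, here’s what routing a request through a proxy looks like from R with {httr} (the proxy host and port are placeholders):

library(httr)

# The request to example.com is sent to the proxy, which forwards it on.
response <- GET(
  "http://example.com",
  use_proxy("proxy.example.com", port = 8080)
)

status_code(response)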
Yoav Raskin suggested that it would be useful to support right-to-left (RTL) text in {emayili}, so that languages like Hebrew, Arabic and Aramaic would render properly. I’ll be honest, this was not something that I had previously considered. But agreed, it would be a cool feature.
By default <img> tags are wrapped in a tight <p></p> embrace by {knitr}. In general this works really well. However, I want to have more control over image formatting for {emayili}.
I love the clean simplicity of an R Markdown document. But sometimes it can feel a little bare and utilitarian. This is especially the case if it’s rendered into the body of an email. How about injecting a little more pizzazz?
I don’t frequently use parameters in R Markdown documents. The initial implementation of render() in {emayili} did not cater for them. A small tweak makes it possible though.
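As a reminder of how parameters work: they’re declared in the document’s YAML header and then referenced via params$ in the body. A sketch of how they’d be used with {emayili}, assuming the parameters are passed through a params argument mirroring rmarkdown::render() (treat that argument name as an assumption):

# report.Rmd declares its parameters in the YAML header:
#
#   params:
#     name: "Alice"
#
# and refers to them as params$name in the body.

library(emayili)

envelope() %>%
  render("report.Rmd", params = list(name = "Bob"))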
We’ve been able to attach text and HTML content to messages with {emayili}. But something that I’ve really been wanting to do is render Markdown directly into an email.
In version 0.4.19 I’ve added the ability to directly render Plain Markdown into a message. That version is not on CRAN, so you’ll need to install from GitHub.
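A quick sketch of what that looks like, assuming the development version lives at datawookie/emayili on GitHub and that render() accepts the Markdown text directly (it might equally be a path to a .md file):

# Install the development version from GitHub.
remotes::install_github("datawookie/emayili")

library(emayili)

envelope() %>%
  render("Plain Markdown with **bold**, *italics* and [links](https://example.com).")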
At Fathom Data we use Clockify to keep detailed records of the time that we spend working on our clients’ projects. Up until fairly recently we manually generated timesheets at the end of each month that were sent through to the clients along with their invoices. Our experience has been that providing detailed timesheets helps foster trust and transparency. However, with a growing team and an expanding clientele, generating these timesheets has become progressively more laborious. Time to automate!
It’s often handy to have access to an HTTP proxy. I use this recipe from time to time to quickly fling together a proxy server which I can use to relay HTTP requests from a different origin.
When writing an R package I usually create a README.Rmd file that I render to README.md. I then use {pkgdown} to create the documentation. I run the last step via CI, so once it’s set up I never need to think about it again.
The problem is that I regularly forget to process the README.Rmd file, which means that even though the .Rmd is kept up to date, everything downstream of it lags behind.
What if I automated the process? I created a simple pre-commit hook which processes README.Rmd whenever I make a commit and automatically adds any changes to the commit.
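The hook itself is tiny. Something along these lines in .git/hooks/pre-commit (a minimal sketch; devtools::build_readme() is one way to do the rendering):

#!/bin/sh
# Rebuild README.md from README.Rmd and stage the result so that it's
# included in the current commit.
Rscript -e 'devtools::build_readme()'
git add README.md

Remember to make the hook executable with chmod +x .git/hooks/pre-commit.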
In a previous post I looked at how to set up Websockify behind an NGINX proxy. The ultimate goal was to accommodate multiple simultaneous users. Although the setup in that post worked, it becomes very resource-hungry when the number of users is large because there’s a Websockify instance running for each user.
A recent issue on the {emayili} GitHub repository prompted me to think a bit more about email address validation. When I started looking into this I was somewhat surprised to learn that it’s such a complicated problem. Who would have thought that something as apparently simple as an email address could be linked with such complexity?
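To give a flavour of the problem: a quick and dirty check like the one below will accept most everyday addresses, but it’s nowhere near what RFC 5322 actually permits (quoted local parts, comments, address literals and so on).

# A deliberately naive check: something, an @, something, a dot, something.
is_email <- function(address) {
  grepl("^[^@[:space:]]+@[^@[:space:]]+\\.[^@[:space:]]+$", address)
}

is_email("alice@example.com")   # TRUE
is_email("not-an-email")        # FALSE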
I recently moved from suburban South Africa to rural England. I’m figuring out my new environment. Making some maps seemed to be a good way to get familiar with the surroundings.
In the process I wanted to figure out two things:
how to get maps with a consistent aspect ratio at different latitudes (there’s a quick sketch of the idea below); and
how to overlay a partially transparent map layer.
To make things more interesting I’ll create maps of both my old and new locations.
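On the first of those points, the nub of the issue is that a degree of longitude shrinks as you move away from the equator (by a factor of cos(latitude)), so a bounding box with a fixed width and height in degrees covers a different physical area in England than it does in South Africa. A rough sketch of the correction (not necessarily the exact approach used for the maps in this post):

# Bounding box centred on (lon, lat) whose longitude span is stretched by
# 1 / cos(latitude), so that the box covers a similar physical extent at
# different latitudes. The default half-height of 0.05 degrees is arbitrary.
bounding_box <- function(lon, lat, half_height = 0.05) {
  half_width <- half_height / cos(lat * pi / 180)
  c(
    left   = lon - half_width,
    bottom = lat - half_height,
    right  = lon + half_width,
    top    = lat + half_height
  )
}

bounding_box(-1.5, 53.0)    # Somewhere in England.
bounding_box(30.0, -30.0)   # Somewhere in South Africa.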
If you’re going to be exposing noVNC on the (public) internet, then it’s vital that you take some security measures. You should install a suitable SSL certificate and serve noVNC via HTTPS rather than HTTP. Getting that all up and running can be moderately tricky. Here’s a quick recipe to get a minimal setup working.
At Fathom Data we are developing a framework which will enable remote access to Linux desktops via a browser. There’s nothing new to this idea. However, we have a very specific application in mind, so we need to roll our own solution. Importantly, there need to be multiple independent connections catering for a group of users. In this post I’ll show how we used the following tools to make this possible:
This post describes the process of building a custom AMI (Amazon Machine Image) using the AWS CLI. The goal is to automate the entire process, making it completely repeatable.
In the previous post I introduced the {tomtom} package and showed how it can be used for geographic routing. Now we’re going to look at the traffic statistics returned by the TomTom API.
I’ve got a few CI/CD jobs running on GitLab that produce long logs, which in turn get truncated. Since the most interesting stuff normally happens towards the end of the logs (like errors!), this can be really counter-productive.
Job's log exceeded limit of 4194304 bytes.
There’s a fundamental problem with this though: if something’s going to break, it’s inevitably going to happen after the logs have been truncated, so I won’t actually be able to see what’s broken.
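One common workaround (not necessarily the one I end up with) is to stop relying on the job log altogether: capture the output in a file and keep it as a job artifact. Something like this in .gitlab-ci.yml, where ./crawl.sh stands in for whatever the job actually runs:

crawl:
  script:
    # Stream output to the console as usual, but also capture it in a file.
    - ./crawl.sh 2>&1 | tee crawl.log
  artifacts:
    when: always
    paths:
      - crawl.log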
I’m building a crawler which I’m going to wrap up in a Docker image. The crawler writes data to a remote MySQL database. However, there’s a catch: the database connection is via an SSH tunnel. Another wrinkle: the crawler is going to be run on ECS, so the whole thing (including setting up the SSH tunnel) needs to be baked into the Docker image.
This post illustrates the process of connecting to a remote MySQL database via an SSH tunnel from Docker. I’m not sure how secure this is. And there are probably better ways to do this. But it’s a start and it works!
Most people running a Linux system would agree that you should set up swap. According to the poll below, only 28% believe that no swap is required. And I think that they are misguided. Always put some swap on your system. You’ll never regret it.
This post shows an approach to using a rotating Tor proxy with Scrapy.
I’m using the scrapy-rotating-proxies download middleware package to rotate through a set of proxies, ensuring that my requests are originating from a selection of IP addresses. However, I need to have those IP addresses evolve over time too, so I’m using the Tor network.
Setup
I’ve got the following in the settings.py for my Scrapy project:
How much memory and CPU resources should be allocated to a simple Selenium crawler? I’ve been fudging these parameters but the time has come to man up and do this right.
I want my task to have sufficient resources to perform its function. It should never be starved of resources! But, at the same time, I also don’t want to extravagantly allocate excess resources. More resources → higher costs. I want to allocate the minimal resources needed to get the job done.
I’m busy helping a colleague with a Shiny application. The application includes HTML content rendered from a .Rmd document. However, there’s a catch: the .Rmd uses the {DT} package to render a dynamic DataTable. It turns out that this doesn’t immediately work because the JavaScript in the embedded document isn’t run.
I’ll use a simple document and application structure to illustrate the problem.
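Something along these lines (a sketch of the setup, not necessarily identical to the actual application): document.Rmd produces a table with DT::datatable() and app.R pulls the rendered HTML in with includeHTML().

# document.Rmd contains a chunk which renders a dynamic table:
#
#   DT::datatable(mtcars)
#
# app.R then embeds the rendered document.
library(shiny)

ui <- fluidPage(
  # The HTML is inserted, but the JavaScript required by the DataTable
  # is not executed, so the table doesn't work.
  includeHTML("document.html")
)

server <- function(input, output) {}

shinyApp(ui, server)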
We’re developing some training about Apache Airflow and need to have a robust and portable environment for running demos and labs which we can make available to the class. This will reduce the frustration and time wasted getting everybody set up and ensure that everybody is working in the same environment.
We’re building a new training program around Apache Airflow. The major technical challenge with delivering this sort of program is ensuring that everybody in the class has access to a working version of the technology. Since there is generally a diverse range of setups (operating systems, corporate firewalls and personal configurations) this can really be a nightmare.
I’m building an automated reporting system which generates PDF reports. My approach is to use R Markdown to write the report and render to PDF using the excellent {pagedown} package.
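In its simplest form that amounts to a single call to chrome_print(), which renders the R Markdown and then prints the result to PDF using headless Chrome (report.Rmd standing in for the actual report source):

library(pagedown)

# Render report.Rmd and print the resulting HTML to PDF via headless Chrome.
chrome_print("report.Rmd", output = "report.pdf")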
An Application Load Balancer receives requests and distributes them across a selection of processing resources. These processing resources are divided into Target Groups (see the previous post for how to set one up).
Creating an Application Load Balancer
We’re setting up a Flask API which is deployed as a Docker image and running on ECS. We’re going to create a load balancer which will accept requests on port 80 and route them to port 5000 on the API container.
If we want to have an ECS service which is visible to the public, then we need to set up an Application Load Balancer. There are a couple of steps to this process, the first of which is creating a Target Group.
We saw in a previous post that it’s important to ensure that the Selenium container is running and accepting requests before the crawler actually gets started. This is because the crawler depends on Selenium being available. We can use ECS task dependencies to assert this dependency.
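Concretely, that’s a dependsOn entry in the crawler’s container definition. A sketch of the relevant fragment of the task definition (the container names are illustrative):

{
  "name": "crawler",
  "dependsOn": [
    {
      "containerName": "selenium",
      "condition": "HEALTHY"
    }
  ]
}

The HEALTHY condition requires the selenium container to define a healthCheck; START is a looser alternative which only waits for the container to have started.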
In the previous post we saw how to deploy a simple Selenium crawler on ECS. However, the Docker image for the crawler was stored in a Docker Hub repository. Now we’re going to see how to use the AWS Elastic Container Registry (ECR) instead.