Installing CPLEX
Quick notes on the process of installing the CPLEX optimiser.
Quick notes on the process of installing the MOSEK optimiser.
Two previous posts considered using SciPy and CVXPY to solve a simple optimisation problem.
The previous post considered using SciPy to solve a simple optimisation problem.
SciPy is a general-purpose scientific computing library for Python, with an optimize module for optimisation. This was used to solve the water tank reference problem. Both sequential and global solutions are presented below.
We will be considering two types of optimisation problem: sequential optimisation and global optimisation. These approaches can be applied to the same problem but will generally yield distinctly different results. Depending on your objective, one or the other might be the better fit for your problem.
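As a rough, library-free sketch of the local-versus-global distinction (the posts themselves use SciPy; the toy function and multi-start strategy below are illustrative assumptions, not the water tank problem):

```python
# Toy illustration: a local optimiser settles in the nearest basin,
# while a global strategy -- here a simple multi-start -- is more
# likely to find the overall minimum of a multimodal function.

def f(x):
    return (x**2 - 1) ** 2 + 0.3 * x   # two basins; global minimum near x = -1

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

def descend(x, lr=0.01, steps=500):
    """Plain gradient descent: converges to the minimum of the basin it starts in."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

local_x = descend(1.5)                  # stalls in the nearby, shallower basin (x > 0)
global_x = min((descend(x0) for x0 in [-2, -1, 0, 1, 2]), key=f)  # finds x < 0

print(local_x, global_x)
```

The same contrast plays out at larger scale: a single local run is cheap but start-point dependent, while global strategies trade computation for robustness.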
I’m evaluating optimisation systems for application to a large-scale solar energy optimisation project. My primary concerns are efficiency, flexibility and usability. Ideally I’d like to evaluate all of them on a single, well-defined problem, and that problem should at least resemble the solar energy project.
In a previous post I looked at the HTTP request headers used to manage browser caching. In this post I’ll look at a real world example. It’s a rather deep dive into something that’s actually quite simple. However, I find it helpful for my understanding to pick things apart and understand how all of the components fit together.
In this post I’ll be testing the proxy service provided by NetNut. For a bit of context take a look at my What is a Proxy? post.
A proxy is a server or software that acts as an intermediary between a client (often a web browser) and one or more servers, typically on the internet. Proxies are used for a variety of purposes, including improving security, enhancing privacy, managing network traffic, and bypassing restrictions.
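As a minimal sketch of how a client routes its traffic through such an intermediary, here is Python's standard-library urllib (the proxy host and port are placeholders, not a real endpoint):

```python
import urllib.request

# Hypothetical proxy endpoint -- substitute your provider's host:port.
PROXY = "http://proxy.example.com:8080"

# Route both plain and TLS traffic through the proxy.
handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)

# opener.open("https://example.com") would now send the request via the
# proxy, so the target server sees the proxy's address, not the client's.
```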
I recently migrated this blog from GitLab Pages to Vercel. There were two main reasons for the move:
For a side project I needed to scrape data for the NYSE Composite Index going back as far as possible.
In a previous post I looked at retrieving a list of assets from the Alpaca API using the {alpacar} R package. Now we’ll explore how to retrieve historical and current price data.
How to list assets available to trade via the Alpaca API using the {alpacar} R package.
The {alpacar} package for R is a wrapper around the Alpaca API. API documentation can be found here. In this introductory post I show how to install and load the package, then authenticate with the API and retrieve account information.
A few days ago I wrote about a scraper for gathering economic calendar data. Well, I’m back again to write about another aspect of the same project: acquiring earnings calendar data.
Avoiding data duplication is a persistent challenge when acquiring data from websites or APIs. You can try to brute-force it: pull the data again and compare it locally to establish whether it’s fresh or stale. But there are other approaches that, if supported, can make this a lot simpler.
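One widely supported approach is an HTTP conditional request: save the ETag from the previous response and send it back, so the server can reply 304 Not Modified (and skip the body) when nothing has changed. A minimal sketch with Python's standard library, where the URL and ETag value are hypothetical:

```python
import urllib.request

url = "https://example.com/data.json"   # hypothetical endpoint
etag = '"abc123"'                        # ETag saved from the previous response

# Attach the saved validator; a supporting server compares it against the
# resource's current ETag and only sends a fresh body if they differ.
req = urllib.request.Request(url, headers={"If-None-Match": etag})

# urllib raises a 304 as urllib.error.HTTPError, so unchanged data would be
# handled in the except branch of urllib.request.urlopen(req).
```

The same pattern works with Last-Modified and If-Modified-Since for servers that timestamp their resources instead.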
If you use Selenium for browser automation then at some stage you are likely to need to download a file by clicking a button or link on a website. Sometimes this just works. Other times it doesn’t.
I needed an offline copy of an economic calendar with all of the major international economic events. After grubbing around the internet I found the Economic Calendar on Myfxbook which had everything that I needed.
A few months ago I listened to an episode of the Founder’s Journal podcast that reviewed an essay, The Opportunity Cost of Everything, by Jack Raines. If you haven’t read it, then I suggest you invest 10 minutes in doing so. It will be time well spent.