Day 34: Networking
The recipe below works for Light Table v. 0.7.2 and Julia v. 0.4.0. It might work for other versions too, but these are the ones I can vouch for.
I read Data Mining with Rattle and R by Graham Williams over a year ago. It’s not a new book and I’ve just been tardy in writing up a review. That’s not to say that I have not used the book in the interim: it’s been on my desk at work ever since and I’ve dipped into it from time to time. As a reference for ongoing analyses it’s an extremely helpful resource.
Today we’ll be looking at the Distances package, which implements a range of distance metrics. This might seem a rather obscure topic, but distance calculation is at the core of all clustering techniques (which are next on the agenda), so it’s prudent to know a little about how they work.
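To give a flavour of the package, here’s a minimal sketch of the Distances API (using current package syntax, which may differ slightly from the version available at the time of writing):

```julia
using Distances

x = [0.0, 0.0]
y = [3.0, 4.0]

# Each metric is a type; evaluate() computes the distance between two vectors.
evaluate(Euclidean(), x, y)      # 5.0
evaluate(Cityblock(), x, y)      # 7.0
evaluate(Chebyshev(), x, y)      # 4.0
```

The same pattern applies across all of the metrics in the package, which makes it easy to swap one metric for another in clustering code.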
It’s all very well generating myriad statistics characterising your data. How do you know whether or not those statistics are telling you something interesting? Hypothesis Tests. To that end, we’ll be looking at the HypothesisTests package today.
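As a taster, here’s a minimal sketch of a one-sample t-test with HypothesisTests (the sample data are fabricated for illustration):

```julia
using HypothesisTests

# Fictitious sample: is its mean consistent with zero?
x = randn(100)

t = OneSampleTTest(x)   # one-sample t-test against a mean of 0
pvalue(t)               # a large p-value means no evidence against the null
```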
Today I’m looking at the Distributions package.
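A quick sketch of the basic workflow: construct a distribution object, then query or sample it.

```julia
using Distributions

d = Normal(0.0, 1.0)    # the standard Normal distribution

mean(d), std(d)         # (0.0, 1.0)
pdf(d, 0.0)             # density at zero, ≈ 0.3989
cdf(d, 1.96)            # ≈ 0.975
rand(d, 5)              # five random draws
```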
Julia has native support for calling C and Fortran functions. There are also add-on packages which provide interfaces to C++, R and Python. We’ll have a brief look at the support for C and R here. Further details on these and the other supported languages can be found on GitHub.
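The C interface boils down to a single construct, ccall(). A minimal sketch (this assumes that the system maths library can be found as "libm" on the library search path, which is the case on typical Linux setups):

```julia
# Call sqrt() from the system maths library: a (:function, "library") tuple,
# the return type, a tuple of argument types and then the arguments themselves.
r = ccall((:sqrt, "libm"), Float64, (Float64,), 2.0)
# r == 1.4142135623730951
```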
Sudoku-as-a-Service is a great illustration of Julia’s integer programming facilities. Julia has several packages which implement various flavours of optimisation: JuMP, JuMPeR, Gurobi, CPLEX, DReal, CoinOptServices and OptimPack. We’re not going to look at anything quite as elaborate as Sudoku today, but focus instead on finding the extrema in some simple (or perhaps not so simple) mathematical functions. At this point you might find it interesting to browse through this catalog of test functions for optimisation.
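To make this concrete, here’s a sketch of minimising one of those test functions (the Rosenbrock “banana” function) with JuMP. The Ipopt solver is an assumption on my part (any nonlinear solver will do), and JuMP’s macro syntax has changed across versions, so treat this as illustrative rather than definitive:

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x, start = 0.0)
@variable(model, y, start = 0.0)
# The Rosenbrock function has its global minimum at (1, 1).
@NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)
optimize!(model)

value(x), value(y)    # both close to 1.0
```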
There’s a variety of options for plotting in Julia. We’ll focus on those provided by Gadfly and Plotly.
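A minimal Gadfly sketch to set the scene (the broadcast syntax sin.(x) assumes a recent Julia; the output file name is arbitrary):

```julia
using Gadfly

x = collect(0:0.1:2π)
p = plot(x = x, y = sin.(x), Geom.line,
         Guide.xlabel("x"), Guide.ylabel("sin(x)"))

# Render the plot to an SVG file.
draw(SVG("sine.svg", 10cm, 7.5cm), p)
```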
PhysicalConstants is a Julia package which provides the values of a range of physical constants. Currently MKS and CGS units are supported.
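Note that the package has since been redesigned: recent versions organise constants by CODATA release rather than by MKS/CGS unit system. A sketch assuming the modern API:

```julia
using PhysicalConstants.CODATA2018: SpeedOfLightInVacuum, PlanckConstant

SpeedOfLightInVacuum    # 2.99792458e8 m s^-1
PlanckConstant          # 6.62607015e-34 J s
```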
R has an extensive range of builtin datasets, which are useful for experimenting with the language. The RDatasets package makes many of these available within Julia. We’ll see another way of accessing R’s datasets in a couple of days’ time too. In the meantime though, check out the documentation for RDatasets and then read on below.
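Loading one of these datasets is a one-liner; here’s the canonical iris example:

```julia
using RDatasets

# Load R's classic iris data as a DataFrame: dataset(package, name).
iris = dataset("datasets", "iris")

size(iris)          # (150, 5)
first(iris, 3)      # peek at the first few rows
```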
First install the SQLiteODBC and unixODBC packages. Have a quick look at the documentation for unixODBC and SQLiteODBC.
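For reference, here’s a sketch of what the resulting ODBC configuration might look like. The driver and database paths are assumptions on my part; check where your distribution actually installs the SQLite ODBC driver.

```ini
; /etc/odbcinst.ini -- register the SQLite ODBC driver.
[SQLite3]
Description = SQLite3 ODBC Driver
Driver      = /usr/lib/x86_64-linux-gnu/odbc/libsqlite3odbc.so

; /etc/odbc.ini -- define a data source pointing at a database file.
[mydb]
Driver   = SQLite3
Database = /path/to/database.sqlite
```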
This is a small package I put together quickly to satisfy an immediate need: generating abbreviated URLs in R. As it happens I require this functionality in a couple of projects, so it made sense to have a package to handle the details. It’s not perfect but it does the job. The code is available from GitHub along with vague usage information.
In essence the functionality is simple: first authenticate to the shortening service (goo.gl and Bitly are supported at present), then shorten or expand URLs as required. The {longurl} package will perform the latter function too, possibly with greater efficiency.
The previous post looked at metaprogramming in Julia, considering how to write code that will generate or modify other code. Today’s post considers a somewhat less esoteric, yet powerful topic: Parallel processing.
Unlike many other languages, where parallel computing is bolted on as an afterthought, Julia was designed from the start with parallel computing in mind. It has a number of native features which lend themselves to efficient implementation of parallel algorithms. It also has packages which facilitate cluster computing (using MPI, for example). We won’t be looking at those here; instead we’ll focus on coroutines, generic parallel processing and parallel loops.
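A small sketch of the parallel loop machinery. Note that the syntax has moved on since the early releases: Julia 0.4 spelled the parallel reduction @parallel, while recent versions use @distributed from the Distributed standard library.

```julia
using Distributed

addprocs(4)    # launch four local worker processes

# pmap() farms the work out across the workers.
squares = pmap(x -> x^2, 1:10)

# A parallel reduction: each worker sums a chunk of the range and the
# partial sums are combined with (+).
total = @distributed (+) for i in 1:1_000_000
    rand()
end
```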
Your code won’t be terribly interesting without ways of getting data in and out. Ways to do that with Julia will be the subject of today’s post.
Direct output to the Julia terminal is done via print() and println(), where the latter appends a newline to the output.
julia> print(3, " blind "); print("mice!\n")
3 blind mice!
julia> println("Hello World!")
Hello World!

Terminal input is something that I never do, but it’s certainly possible. readline() will read keyboard input until the first newline.
Yesterday I had a look at Julia’s support for Functional Programming. Naturally it also has structures for conventional program flow like conditionals, iteration and exception handling.
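The three structures in a nutshell:

```julia
# Conditional: if blocks are expressions, so they return a value.
x = 7
parity = if x % 2 == 0 "even" else "odd" end

# Iteration.
for n in 1:3
    println(n^2)
end

# Exception handling: sqrt() of a negative Float64 raises a DomainError.
try
    sqrt(-1.0)
catch err
    println("caught: ", err)
end
```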
An earlier post looked at how to work with functions in Julia. This time we’ll dig into Functional Programming, an approach to coding which, not surprisingly, depends on writing functions.
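The functional workhorses are map(), filter() and reduce(), all of which take a function as their first argument:

```julia
map(x -> x^2, 1:5)               # [1, 4, 9, 16, 25]
filter(iseven, 1:10)             # [2, 4, 6, 8, 10]
reduce(+, 1:100)                 # 5050

# And they compose: the sum of the squares of the odd numbers below 10.
sum(map(x -> x^2, filter(isodd, 1:10)))    # 165
```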
Julia performs Just-in-Time (JIT) compilation, using LLVM (the Low Level Virtual Machine) to generate machine-specific assembly code. The first time a function is called, Julia compiles the function’s source code and the results are cached and used for any subsequent calls to the same function. However, there are some additional wrinkles to this story.
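You can see this caching in action from the REPL, and even inspect the code the compiler produces:

```julia
f(x) = 3x^2 + 2x + 1

# The first call compiles f() for Float64 arguments; the second reuses
# the cached machine code and is much quicker.
@time f(2.0)
@time f(2.0)

# Peek at the generated code at two stages of the pipeline.
@code_llvm f(2.0)
@code_native f(2.0)
```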
The previous post considered a selection of development environments for working with Julia. Now we’re going to look at a topic which is central to almost every programming task: variables.
Most coding involves the assignment and manipulation of variables. Julia is dynamically typed, which means that you don’t need to explicitly declare a variable’s data type. It also means that a single variable name can be associated with different data types at various times. Julia has a sophisticated, yet extremely flexible, system for dealing with data types, which is covered in great detail by the official documentation. My notes below simply highlight some salient points I uncovered while digging around.
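Dynamic typing in two lines: the type travels with the value, not with the name.

```julia
x = 3
typeof(x)      # Int64 (on a 64-bit machine)

# The same name can later be rebound to a value of a different type.
x = "three"
typeof(x)      # String

x = 3.0
typeof(x)      # Float64
```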
As a long-term R user I’ve found that there are few tasks (analytical or otherwise) that R cannot immediately handle. Or be made to handle after a bit of hacking! However, I’m always interested in learning new tools. A month or so ago I attended a talk entitled Julia’s Approach to Open Source Machine Learning by John Myles White at ICML in Lille, France. What John told us about Julia was impressive and intriguing. I felt compelled to take a closer look. Like most research tasks, my first stop was the Wikipedia entry, which was suitably informative.
Reading Bayesian Computation with R by Jim Albert (Springer, 2009) inspired a fit of enthusiasm. Admittedly, I was on a plane coming back from Amsterdam and looking for distractions. I decided to put together a Shiny app to illustrate successive Bayesian updates. I had not yet seen anything that did this to my satisfaction. I like to think that my results come pretty close.
As an aside for a Social Media Automation project I have constructed a bot which uses data from the World Wide Lightning Location Network (WWLLN) to construct daily animated maps of global lightning activity and post them on my Twitter feed. The bot runs remotely and autonomously on an EC2 instance.
Word clouds have become a bit cliché, but I still think that they have a place in giving a high level overview of the content of a corpus. Here are the steps I took in putting together the word cloud for the International Conference on Machine Learning (2015).
Sundry notes from the fourth day of the International Conference on Machine Learning (ICML 2015) in Lille, France. Some of this might not be entirely accurate. Caveat emptor.
Colour modelled as a 4-dimensional vector. The physics (Planck’s Law) places some constraints on the components of these vectors. Light density model accounts for rotation as well as asymmetry of galactic axes.
Selected scribblings from the third day at the International Conference on Machine Learning (ICML 2015) in Lille, France. I’m going out on a limb with some of this, since the more talks I attend, the more acutely aware I become of my limited knowledge of the cutting edge of Machine Learning. Caveat emptor.
Belief Propagation describes the passage of messages across a network. The focus of this talk was Belief Propagation within a tree. The authors consider an adaptive algorithm and found that their technique, AdaMP, was significantly better than the current state of the art algorithm, RCTreeBP.
Some notes from the second day at the International Conference on Machine Learning (ICML 2015) in Lille, France. Don’t quote me on any of this because it’s just stuff that I jotted down during the course of the day. Also much of the material discussed in these talks lies outside my field of expertise. Caveat emptor.
Machine Learning is an Exact Science. It’s also an Experimental Science. It’s also Engineering.
Started the day with a run through the early morning streets of Lille. This city seems to wake up late because it was still nice and quiet well after sunrise. Followed by a valiant attempt to sample everything on the buffet breakfast. I’ll know where to focus my attention tomorrow.
The first day of the International Conference on Machine Learning (ICML 2015) in Lille consisted of tutorials in two parallel streams. Evidently the organisers are not aware of my limited attention span because these tutorials each had nominal durations of more than two hours, punctuated by a break in the middle.
“Machine Learning with R Cookbook” by Chiu Yu-Wei is nothing more or less than it purports to be: a collection of 110 recipes for applying Data Analysis and Machine Learning techniques in R. I was asked by the publishers to review this book and found it to be an interesting and informative read. It will not help you understand how Machine Learning works (that’s not the goal!) but it will help you quickly learn how to apply Machine Learning techniques to your own problems.