Fitting with lmfit

General-purpose fitting in Python can sometimes be a bit more challenging than one might at first suspect, given the robust nature of tools like NumPy and SciPy. First we had leastsq. It works, although it often requires a bit of manual tuning of initial guesses and always requires computing standard errors by hand from a covariance matrix (which isn't even one of the return values by default). Later we got curve_fit, which is a bit more user friendly and even returns the covariance matrix by default, so standard errors are only a square root away. Alas, curve_fit is just a convenience wrapper on top of leastsq and suffers from some of the same general headaches.
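To make the scipy side concrete, here is a minimal curve_fit sketch; the exponential model, parameters, and synthetic data are made up purely for illustration. Note that the standard errors still have to be extracted by hand from the returned covariance matrix:

```python
import numpy as np
from scipy.optimize import curve_fit


def model(x, a, tau):
    # Hypothetical model: exponential decay with amplitude a and lifetime tau.
    return a * np.exp(-x / tau)


rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = model(x, 2.0, 2.5) + 0.01 * rng.standard_normal(x.size)

# curve_fit returns the best-fit parameters and the covariance matrix;
# standard errors are the square roots of its diagonal elements.
popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0))
perr = np.sqrt(np.diag(pcov))
```

Getting a reasonable fit here also depends on the initial guess `p0`, which is exactly the kind of bookkeeping lmfit helps with.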

These days, we have the wonderful lmfit package. Not only does lmfit make fitting more user friendly, but it is also quite a bit more robust than using SciPy directly. The documentation is thorough and rigorous, but that also means it can be a bit overwhelming to get started with. Here I work through a basic example in two slightly different ways in order to demonstrate how to use it.

Generating the data

Let's assume we have data that resembles a decaying sine wave (e.g., a damped oscillator). lmfit has quite a few pre-defined models, but this is not one of them. We can simulate the data with the following code:

import numpy as np

x = np.linspace(0, 10, 100)
y = np.sin(5*x)*np.exp(-x/2.5)

Real data is noisy, so let's add …

more ...

Using Postgres as a time series database

Time series databases (TSDBs) are quite popular these days. To name a few, there are InfluxDB, Graphite, Druid, Kairos, and Prometheus. All aim to optimize data storage and querying for time-based data, which is highly relevant in a physics lab, where there is a multitude of "metrics" (to borrow a phrase used frequently in TSDB documentation) that naturally lend themselves to time series representation: lab (and individual device) temperatures, vacuum chamber pressures, and laser powers, just to name a few. Ideally, one could log various data to one of these databases and then use a tool like Grafana to visualize it. Sadly, more traditional relational databases like SQLite and PostgreSQL are not (currently) supported by Grafana (although this is now being addressed by a datasource plugin in development).

Nevertheless, there are quite a few reasons to favor a traditional RDBMS over a newfangled TSDB. To name a few:

  • Longevity: SQL has been around since the 1970s and became standardized in the 1980s.
  • Ubiquity: almost every server (web or otherwise) has some SQL database installed. If not, SQLite doesn't even require a server!
  • Community: not to suggest there aren't good communities with TSDBs, but the Postgres and SQLite communities in particular are generally quite helpful. Combined with the longevity aspect, any question one may have about how to accomplish a particular task with a SQL database is likely to be easily answerable with a simple web search.
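As a minimal sketch of the idea, a generic metrics table needs little more than a timestamp, a metric name, and a value. Everything below (the table layout and the sample metric) is a hypothetical illustration using the standard library's sqlite3 module, but the same schema translates directly to Postgres:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema: one row per (timestamp, metric, value) sample.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metrics (
        time  TEXT NOT NULL,
        name  TEXT NOT NULL,
        value REAL NOT NULL
    )
""")

now = datetime.now(timezone.utc).isoformat()
conn.execute("INSERT INTO metrics VALUES (?, ?, ?)",
             (now, "chamber_pressure", 1.2e-11))
conn.commit()

# Querying a single metric's history is an ordinary SELECT.
rows = conn.execute(
    "SELECT name, value FROM metrics WHERE name = ?",
    ("chamber_pressure",),
).fetchall()
```

In a real deployment one would also index the `time` column, since nearly every query against time series data filters or orders by it.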

In this post, I will outline a few things I have learned in using …

more ...

Importing one Mercurial repository into another

In the ion trap group, we usually use Mercurial for version controlling software we write for experimental control, data analysis, and so on. This post outlines how to import the full history of one repository into another. This can be useful for cases where it makes sense to move a sub-project directly into its parent, for example.

Convert the soon-to-be child repository

With the Mercurial convert extension, you can rename branches and move or filter files. As an example, say we have a repo with only the default branch that is to be imported into a super-repository.

For starters, we will want all our files in the child repo to be in a subdirectory of the parent repo and not include the child's .hgignore. To do this, create a file filemap.txt with the following contents:

rename . child
exclude .hgignore

The first line will move all files in the repo's top level into a directory named child.

Next, optionally create a branchmap.txt file for renaming the default branch to something else:

default child-repo

Now convert:

hg convert --filemap filemap.txt --branchmap branchmap.txt child/ converted/

Pull in the converted repository

From the parent repo:

hg pull -f ../converted

Ensure the child commits are in the draft phase with:

hg phase -f --draft -r <first>:<last>

Rebase as appropriate

hg rebase -s <child rev> -d <parent rev>

To keep the child's changed branch name, use the --keepbranches option.


more ...

Running (possibly) blocking code like a Tornado coroutine

One of the main benefits of using the Tornado web server is that it is (normally) a single-threaded, asynchronous framework that can rely on coroutines for concurrency. Many drivers already exist to provide a client library utilizing the Tornado event loop and coroutines (e.g., the Motor MongoDB driver).

To write your own coroutine-friendly code for Tornado, there are a few different options available, all requiring that you somehow wrap blocking calls within a Future so as to allow the event loop to continue executing. Here, I demonstrate one recipe to do just this by utilizing Executor objects from the concurrent.futures module. We start with the imports:

import random
import time
from tornado import gen
from tornado.concurrent import run_on_executor, futures
from tornado.ioloop import IOLoop

We will be using the run_on_executor decorator, which requires that the class whose methods we decorate have some type of Executor attribute (the default is to use the executor attribute, but a different one can be specified with a keyword argument passed to the decorator). We'll create a class to run our asynchronous tasks and give it a ThreadPoolExecutor for executing them. In this contrived example, our long-running task just sleeps for a random amount of time:

class TaskRunner(object):
    def __init__(self, loop=None):
        self.executor = futures.ThreadPoolExecutor(4)
        self.loop = loop or IOLoop.instance()

    @run_on_executor
    def long_running_task(self):
        tau = random.randint(0, 3)
        time.sleep(tau)  # simulate a slow, blocking operation
        return tau

Now, from within a coroutine, we can let the tasks run as …

more ...

Background tasks with Tornado

I have been using Tornado lately for distributed control of devices in the lab where an asynchronous framework is advantageous. In particular, we have a HighFinesse wavelength meter which we use to monitor and stabilize several lasers (up to 14 at a time). Previously, a custom server for controlling this wavemeter was written using Twisted, but that has proven difficult to upgrade, distribute, and maintain.

One thing that is common for such a control scenario is that data needs to be refreshed continuously while still allowing incoming connections from clients and appropriately executing remote procedure calls. One method would be to periodically interrupt the Tornado IO loop to refresh data (and in fact, Tornado has a class to make this easy for you in tornado.ioloop.PeriodicCallback). This can be fine if the data refreshing does not take too much time, but all other operations will be blocked until the callback is finished, which can be a problem if the refreshing operation is slow. Another option is to have an additional thread separate from the Tornado IO loop that handles refreshing data. This certainly works, but adds the complexity of needing to use thread-safe communications to stop the thread when the main application is shut down or when other tasks depend on the successful completion of the refresh.
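For concreteness, the separate-thread approach can be sketched with the standard library alone. Everything here (the Refresher class and its stand-in refresh step) is illustrative, not taken from the wavemeter server; note the threading.Event plumbing needed just to shut the thread down cleanly:

```python
import threading
import time


class Refresher:
    """Background thread that periodically refreshes a value until stopped."""

    def __init__(self, interval=0.05):
        self.interval = interval
        self.value = None
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self.value = time.time()  # stand-in for reading a device
            # wait() doubles as the sleep and the stop signal, so
            # shutdown doesn't have to wait out a full interval.
            self._stop.wait(self.interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()


refresher = Refresher()
refresher.start()
time.sleep(0.2)
refresher.stop()
```

This works, but the stop-event bookkeeping is exactly the sort of complexity that the run_on_executor approach below avoids.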

Luckily, Tornado also includes a decorator, tornado.concurrent.run_on_executor, to run things in the background for you using Python's concurrent.futures module (which is standard starting in Python 3.3 and backported …

more ...