Fitting with lmfit

General-purpose fitting in Python can be a bit more challenging than one might at first suspect, given the robust nature of tools like NumPy and SciPy. First we had scipy.optimize.leastsq. It works, but it often requires manual tuning of initial guesses and always requires computing standard errors by hand from the covariance matrix (which isn't even one of the return values by default). Later we got curve_fit, which is a bit more user friendly and even estimates and returns standard errors for us by default! Alas, curve_fit is just a convenience wrapper on top of leastsq and suffers from some of the same headaches.

These days, we have the wonderful lmfit package. Not only does lmfit make fitting more user friendly, but it is also quite a bit more robust than using SciPy directly. The documentation is thorough and rigorous, but that also means it can be a bit overwhelming to get started with. Here I work through a basic example in two slightly different ways in order to demonstrate how to use it.

Generating the data

Let's assume we have data that resembles a decaying sine wave (e.g., a damped oscillator). lmfit has quite a few pre-defined models, but this is not one of them. We can simulate the data with the following code:

import numpy as np

x = np.linspace(0, 10, 100)
y = np.sin(5*x)*np.exp(-x/2.5)

Real data is noisy, so ...
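One simple way to simulate this is to add Gaussian noise on top of the clean signal (the noise scale of 0.1 and the seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # seeded for reproducibility

x = np.linspace(0, 10, 100)
y = np.sin(5 * x) * np.exp(-x / 2.5)
# Add zero-mean Gaussian noise with standard deviation 0.1
y_noisy = y + rng.normal(scale=0.1, size=x.size)
```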

more ...

Using Postgres as a time series database

Time series databases (TSDBs) are quite popular these days. To name a few, there are InfluxDB, Graphite, Druid, Kairos, and Prometheus. All aim to optimize storage and querying of time-based data, which is highly relevant in a physics lab, where there is a multitude of "metrics" (to borrow a phrase used frequently in TSDB documentation) that naturally lend themselves to a time series representation: lab (and individual device) temperatures, vacuum chamber pressures, and laser powers, just to name a few. Ideally, one could log various data to one of these databases and then use a tool like Grafana to visualize it. Sadly, more traditional relational databases like SQLite and PostgreSQL are not (currently) supported by Grafana (although this is now being addressed by a datasource plugin in development).

Nevertheless, there are quite a few reasons to favor a traditional RDBMS over a newfangled TSDB. To name a few:

  • Longevity: SQL has been around since the 1970s and became standardized in the 1980s.
  • Ubiquity: almost every server (web or otherwise) has some SQL database installed. If not, SQLite doesn't even require a server!
  • Community: not to suggest there aren't good communities around TSDBs, but the Postgres and SQLite communities in particular are generally quite helpful. Combined with the longevity aspect, this means that just about any question about how to accomplish a particular task with a SQL database can be answered with a simple web search.
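As a small illustration of how naturally time series data fits into an ordinary relational database, here is a sketch using Python's standard-library sqlite3 module (the table layout, column names, and sample values are my own invention, not from any particular lab setup):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metrics (
        time  TEXT NOT NULL,  -- ISO 8601 timestamp
        name  TEXT NOT NULL,  -- e.g. 'lab_temperature'
        value REAL NOT NULL
    )
""")
rows = [
    ("2017-01-01T00:00:00", "lab_temperature", 22.1),
    ("2017-01-01T00:01:00", "lab_temperature", 22.3),
    ("2017-01-01T00:02:00", "lab_temperature", 22.2),
]
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?)", rows)

# Average temperature over the logged interval
(avg,) = conn.execute(
    "SELECT AVG(value) FROM metrics WHERE name = 'lab_temperature'"
).fetchone()
```

The same schema and queries carry over essentially unchanged to PostgreSQL, which additionally offers a proper timestamp type and richer date/time functions.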

In this post, I will outline a few things I have learned ...

more ...