JKTEBOP version 44 is now available.
The main change is that a negative amount of third light is now allowed: third light can range from -0.9 to +1.0. This is useful for a few specific projects but should not be used in general.
The JKTEBOP code fits a model to the light curves of detached eclipsing binary stars in order to derive the radii of the stars, as well as various other quantities. It is very stable and has a lot of additional goodies, including extensive Monte Carlo and bootstrapping error-analysis algorithms, so it is pretty much all you need to analyse an eclipsing binary light curve. It is also excellent for transiting extrasolar planetary systems.
All input and output is done by text files, making its operation as simple as possible. JKTEBOP has not been given a graphical interface or plotting capability; I prefer to use IDL or GNUPLOT to create plots from the JKTEBOP output.
JKTEBOP is written in almost-standard FORTRAN 77 and has been developed using first the g77 compiler, then the ifort compiler, and now the gfortran compiler.
If you use JKTEBOP then please cite my paper on LL Aquarii. Other relevant papers are my first one on V453 Cygni and my first Homogeneous Studies one.
There are various comments in the code, but documentation is otherwise very limited (my apologies), so you will need to have a decent idea of what you're doing. The original EBOP user manual is useful if you are able to get hold of a photocopy, and I hope to make it available electronically at some point. The code takes care of almost everything, so you are only required to ensure that you've given it decent data and reasonable initial parameter estimates.
The source code is available on its own as a text file or with example input and output files in a tarfile. The examples included are WASP-4 (transiting planet), WW Aurigae (eclipsing binary) and LL Aquarii (eclipsing binary with radial velocities). The previously available versions of the code are available for reference: v10, v15, v21, v25, v28, v34, v40 and v43.
I compile the code using the command:
gfortran -O3 -o jktebop jktebop.f
where "-O3" is an optimisation flag.
NEW: I have made a screencast about how to use JKTEBOP, using the eclipsing binary YZ Cas as an example. It is 22 minutes 35 seconds long and available on YouTube.
I run JKTEBOP from the command line on a Linux operating system (currently Kubuntu 12.04). The command to run is:
jktebop [parameterinputfile]
(omitting the square brackets). An empty input file can be created for you to enter parameters in the correct places by typing
jktebop newfile
Input files: (1) a file containing the initial parameter values, and (2) a file containing the observational data. The parameter file needs various parameters to be entered at the start of each line; examples are available for download above. The input datafile contains the light curve to be fitted: each line should contain a time and a magnitude. JKTEBOP checks whether the datafile has three columns of data; if so, the third column is read in and assumed to contain the observational error of each datapoint.
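As an illustration of this datafile format, a minimal reader might look like the sketch below. This is Python rather than JKTEBOP's own Fortran, the function name is hypothetical, and the column-detection behaviour is an assumption based on the description above.

```python
def read_light_curve(path):
    """Illustrative sketch (not part of JKTEBOP): parse a light-curve
    datafile in which each line holds a time and a magnitude, with an
    optional third column of observational errors."""
    times, mags, errors = [], [], []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue  # skip blank lines
            times.append(float(fields[0]))
            mags.append(float(fields[1]))
            # Mimic the check described above: a third column, if
            # present, is taken to be the per-point observational error.
            if len(fields) >= 3:
                errors.append(float(fields[2]))
    # Only use the errors if every datapoint has one
    if len(errors) != len(times):
        errors = None
    return times, mags, errors
```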
Output files: (1) a file containing the output parameters and other results, (2) a file with the best-fitting model light curve, and (3) a file giving the input data and their residuals around the best fit. Some of the error-analysis algorithms produce file (1) plus a second file containing the results of every simulation.
JKTEBOP is based on the EBOP code written by Paul B Etzel. In my publications I generally cite Popper & Etzel (1981AJ.....86..102), Etzel (1981psbs.conf..111E), and/or Nelson & Davis (1972ApJ...174..617N). The latter reference is for the original model, which was significantly modified for EBOP by Paul Etzel.
Examples of the use of JKTEBOP can be found in most of my papers on detached eclipsing binaries and transiting extrasolar planetary systems.
Task 1: This task inputs the effective temperatures and surface gravities of two stars and outputs limb darkening coefficients for them. The actual code for this has been split off into a separate program (JKTLD); Task 1 is now just a system call to JKTLD.
Task 2: This inputs a parameter file and calculates a synthetic light curve (10000 points between phases 0 and 1) using the parameters you put in the file.
Task 3: This inputs a parameter file (containing estimated parameter values) and an observed light curve. It fits the light curve using Levenberg-Marquardt minimisation and produces an output parameter file, a file of residuals of the observations, and a file containing the best fit to the light curve (as in Task 2). The parameter values have formal errors (from the covariance matrix found by the minimisation algorithm) but these are not overall uncertainties; you will need to run other tasks to get reliable parameter uncertainties.
Task 4: This inputs a parameter file and finds the best fit to the light curve. It then rejects datapoints which are distant from the fit (using a sigma value which you gave it) and refits the data.
Task 5: This inputs a parameter file and light curve and tries to find the global minimum by refitting the light curve many times from quite different sets of initial parameters (based on the ones you gave it).
Task 6: This inputs a parameter file and light curve and finds the best fit. For each adjusted parameter it then studies how the quality of fit changes as the parameter value is varied. This is a good way to find robust errors.
Task 7: This inputs a parameter file, finds the best fit, and then uses bootstrapping to estimate the uncertainties in the parameters. An excellent description of bootstrapping can be found in Press et al. (1993), chapter 15. Here, the light curve is randomly resampled (with replacement) many times and the resulting datasets are individually refitted. The range in parameter values found gives the uncertainty in each parameter. I recommend using 10000 datasets (although 1000 is enough for many analyses).
Task 8: This inputs a parameter file, finds the best fit, and then uses Monte Carlo simulations to estimate the uncertainties in the parameters (again, see Press et al. 1993, chapter 15). Here, the best-fitting light curve model is re-evaluated at the phases of the actual observations, Gaussian simulated observational noise is added, and the result is refitted. This process is repeated (again, I recommend 10000 times for final results) and the range in parameter values found gives the uncertainty in each parameter. Task 8 differs from Task 7 in that it explicitly assumes that the best fit to the observations is a good fit, but this is normally a decent assumption.
Task 9: This inputs a parameter file, finds the best fit, and then assesses the importance of correlated "red" noise on the parameters of the fit. The residual-shift method is used: the residuals around the best fit are shifted point-by-point through the observational data (with those which fall off the end of the dataset wrapping round to the beginning), and after each shift a new best fit is calculated. You therefore end up with the same number of best fits as the light curve has datapoints. The 1σ errors are calculated, as with the Monte Carlo algorithm, by sorting the best fits and taking the values which correspond to the central 68.3%. Each shifted best fit is output to a file and can therefore be plotted to see what actually happened - these plots can be very interesting.
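The rejection step in Task 4 can be sketched in miniature as follows. This is an illustrative Python stand-in, not JKTEBOP's own code: the model is any callable, and the details of thresholding and refitting in JKTEBOP itself may differ.

```python
import statistics

def sigma_clip(times, mags, model, threshold):
    """One rejection pass in the spirit of Task 4: compute the
    residuals around the current best fit, then discard datapoints
    lying more than `threshold` standard deviations from it. The
    caller would then refit the surviving data. `model(t)` is any
    callable representing the current fit."""
    residuals = [m - model(t) for t, m in zip(times, mags)]
    sigma = statistics.stdev(residuals)
    return [(t, m) for t, m, r in zip(times, mags, residuals)
            if abs(r) <= threshold * sigma]
```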
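Task 6's scan of fit quality against a parameter value can also be sketched. The helper name is hypothetical, and `evaluate_chi2` stands in for refitting all the other parameters with the scanned one held fixed (which is where the real work happens in JKTEBOP); the rise-by-one rule for chi-squared is the standard 1-sigma criterion for a single parameter described in Press et al. (1993).

```python
def chi2_scan(param_values, evaluate_chi2):
    """Task 6 in miniature: step one parameter through a grid of
    fixed values and record the fit quality at each step. The range
    of values for which chi-squared stays within 1.0 of its minimum
    gives approximate 1-sigma bounds on that parameter."""
    chi2 = [evaluate_chi2(v) for v in param_values]
    best = min(chi2)
    within = [v for v, c in zip(param_values, chi2) if c <= best + 1.0]
    return min(within), max(within)
```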
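To make the difference between Tasks 7 and 8 concrete, here is a toy sketch in which the "model" is just a constant magnitude level, so each refit is simply a mean. The function names and the trivial model are illustrative assumptions, not JKTEBOP's actual implementation.

```python
import random
import statistics

def fit_level(mags):
    # Stand-in for the full light-curve fit: the best-fitting
    # constant level is just the mean of the magnitudes.
    return statistics.fmean(mags)

def bootstrap_errors(mags, n_sets, rng):
    """Task 7 in miniature: resample the data with replacement many
    times, refit each resampled dataset, and use the scatter of the
    refitted parameter as its uncertainty."""
    fits = [fit_level(rng.choices(mags, k=len(mags)))
            for _ in range(n_sets)]
    return statistics.stdev(fits)

def monte_carlo_errors(mags, noise_sigma, n_sets, rng):
    """Task 8 in miniature: re-evaluate the best-fitting model, add
    Gaussian noise of the observed size, and refit; this explicitly
    assumes the best fit is a good description of the data."""
    best = fit_level(mags)
    fits = [fit_level([best + rng.gauss(0.0, noise_sigma)
                       for _ in mags]) for _ in range(n_sets)]
    return statistics.stdev(fits)
```

For 100 datapoints with 0.01 mag scatter, both routines return an uncertainty on the level of roughly 0.001 mag, as expected for the error on a mean.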
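The residual-shift loop in Task 9 can likewise be sketched. Here `fit_func` is any refitting routine returning a fitted parameter and a callable model - a toy stand-in for JKTEBOP's full fit - and the function names are hypothetical.

```python
def residual_shift_fits(times, mags, fit_func):
    """Task 9 in miniature: compute the residuals around the best
    fit, shift them cyclically through the data one point at a time,
    and refit after each shift, giving one best fit per datapoint.
    `fit_func(times, mags)` returns (parameter, model)."""
    n = len(times)
    param, model = fit_func(times, mags)
    residuals = [m - model(t) for t, m in zip(times, mags)]
    fits = []
    for shift in range(n):
        # Residuals that fall off the end wrap round to the beginning
        shifted = residuals[shift:] + residuals[:shift]
        new_mags = [model(t) + r for t, r in zip(times, shifted)]
        fits.append(fit_func(times, new_mags)[0])
    return fits

def central_68(values):
    """1-sigma interval as described above: sort the best fits and
    take the values bounding the central 68.3 per cent."""
    s = sorted(values)
    n = len(s)
    return s[int(round(0.1585 * (n - 1)))], s[int(round(0.8415 * (n - 1)))]
```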
A page of Frequently Asked Questions is now available and is the first place to consult if you have problems.
Last modified: 2025/01/30 John Southworth (Keele University, UK)