

Change Note
V2.0.0 11-Feb-91 Release of CDF V2.0.
V2.1.0 7-Jun-91 Release of CDF V2.1.
V2.2.0 20-May-92 Release of CDF V2.2.
V2.3.0 1-Oct-92 Release of CDF V2.3.
V2.4.0 26-Jan-94 Release of CDF V2.4.
V2.5.0 21-Dec-94 Release of CDF V2.5.
V2.6.0* 23-Feb-96 Alpha release of CDF V2.6.
V2.6.0# 5-Apr-96 Beta release of CDF V2.6.
V2.6.0& 13-Jun-96 Beta release of CDF V2.6.
V2.6.0$ 17-Jun-96 Beta release of CDF V2.6.
V2.6.0@ 15-Aug-96 Beta release of CDF V2.6.
V2.6.0 10-Oct-96 Release of CDF V2.6.
V2.7.0 27-Sep-99 Release of CDF V2.7.
V2.7.1 16-May-01
  • Add new ports: Solaris on PC, MacOS X and Linux on DEC/Alpha.
  • Correct bugs in CDFDump program.
  • Correct and enhance CDF-Java APIs and Java Native Interface (JNI).
  • Add new Java-based tool programs for CDFEdit and CDFExport.
  • Add variable name checking in CDFconvert for compression option.
  • Add Cygwin port for win32 on PC. Create individual tool programs for Windows using Cygwin libraries.
05-Dec-01 Add a new set of APIs: CDFgetrVarsRecordData, CDFgetzVarsRecordData, CDFputrVarsRecordData and CDFputzVarsRecordData to allow a full single record read/write for a group of r/zVariables.
26-Jan-02 Correct a bug in cdfcmp.c for showing dimension size differences between two zVariables.
28-Jun-02 Add a new option in cdfcmp.c to allow tolerance checks while comparing two unequal data values.
22-Jul-02 Handle f77 Fortran for Cygwin.
25-Nov-02
  • Add 64-bit mode for Solaris/sparc64 for sparcv9.
  • gcc for 64-bit is added.
V2.7.2 08-Apr-04 Changed the way the current variable selection is handled: the currently selected variable's offset is now kept as a reference for subsequent variable selections. Previously, selecting a variable or getting a variable's field data always started the search from the beginning of the variable list, which required too many I/Os when accessing a sequence of variables.
04-May-04 Corrected a bug that caused the f77 to fail under 64-bit environment.
V3.0 07-Jan-05
  • Changed the file offset from type long (32-bit) to off_t (64-bit) on platforms where that data type is supported, and changed the file I/O functions accordingly. The CDF internal file structures changed, but the library remains backward compatible, meaning programs built with V3.0 can still access (read/update) CDF files of older versions.
  • Expanded the length of variable and attribute name from 64 to 256.
  • The CDF/Java code was modified to ensure that concurrent access to a CDF is thread-safe.
  • Added a new data type, CDF_EPOCH16, to accommodate more refined time resolution within a second. A new set of functions, similar to those of CDF_EPOCH data type, was added.
  • Modified cdfdump tool program to be more efficient in data reading.
25-Mar-05 Changed JNI and Java-CDF APIs to ensure proper operations within the multiple-threaded environment.
V3.1.0 27-May-05 Added new sets of APIs to allow Standard Interface to interact with zVariables and other CDF-related information.
11-Jul-05 Added MingW port for PC.
05-Aug-05 Added new functions to allow creating CDF files in older, V2.7, version, not just the default V3.1.
18-Jan-06 Modified code, including tools, to handle file paths that have a '.cdf' or '.skt' extension where one is not expected.
30-Jan-06 Added FreeBSD port for PCs.
1-Feb-06 Added Intel C++ and Fortran compilers for Linux port on PCs.
22-Jun-06
  • Added support for HPUX and AIX for both 32 and 64-bit mode.
  • Allow Mac OS X to build code for PPC and i386.
V3.1.1 12-Oct-06
  • Modified to allow upper/lower case CDF name for Windows.
  • Changed Epoch to encode the filled value of -1.0E31 as the date 9999-12-31 23:59:59.999. For EPOCH16, a pair of -1.0E31 filled values is encoded as 9999-12-31 23:59:59.999:999:999:999.
V3.2.0 21-Oct-06 Added MD5 checksum feature for data integrity check of the CDF. Modified tools to use the checksum feature. Added a couple of new tool programs: cdfdump and cdfmerge. Renamed the original cdfdump to cdfirsdump.
25-Apr-07 Changed the default size of cache buffers from 512 to 10240 bytes to improve data access performance.
18-Jun-07 Enhanced READONLY mode to improve meta-data access performance. When READONLY mode is selected, all meta-data is read and stored in internal data structures which are then accessed whenever meta-data is requested.
V3.2.1 24-Apr-08 Modified the library so a potential buffer overflow vulnerability when reading specially-crafted (invalid) CDF can be avoided.
V3.2.2 10-Aug-08 A maintenance release. Modified the Java tools so they can handle CDF files with space(s) in the file path. The cdfmerge tool was modified to allow merging 'Epoch' variable data just like other variables. Also corrected a couple of bugs related to READONLY mode functions that caused a memory leak and a failure to find attribute entries.
V3.2.3 24-Nov-08 Modified CDFdump to add a new output option.
V3.2.4 20-Jan-09 Bug fixed for checksum option running on 64-bit machines.
V3.3.0 10-Jun-09
  • Added an optional process to validate data fields in a CDF when it is opened. This addresses a potential vulnerability in the library when a compromised CDF is accessed.
  • A new tool, cdfvalidate, is added.
  • All CDF tools, except cdfconvert, will have sanity checks on.
  • Added Linux running on PPC.
  • Modified CDF-Java code and JNI to properly handle multi-dimensional variables in COLUMN-major file, and preserve variable's dimensionality if a non-variant dimension exists.
V3.3.1 5-May-11
  • Added several new features to cdfexport tool program.
  • Bug fixed in Java-CDF APIs and a couple of new methods were added.
  • New features were added to cdfdump to allow selected variables and a range of records to be dumped.
  • MingW port was revised to handle 'pdcurses', if installed, for the curses-based tools.
  • Fixed cdf validation to allow some V2.0 files to be valid.
  • Bugs fixed in the core library, Java-CDF APIs and tools.
1-Sep-11 (V3.3.1.1) Fixed the 64-bit Fortran on Solaris/Intel port. Changed the release sub-increment from a character to an alphanumeric number. Fixed Java and C# APIs. Fixed a library bug.
V3.3.2 5-Sep-11 (V3.3.2.1-2)
  • Added CDF_INT8 and CDF_TIME_TT2000 data types.
  • Implemented leap seconds into CDF_TIME_TT2000.
  • New features were added to handle epoch values in/to CDF_TIME_TT2000: including encoding, decoding, parsing, computing and breakdown.
  • Bugs fixed in the core library.
9-Jan-12 (V3.3.2.3)
  • Email address changed for cdfsupport.
  • Bugs fixed in the core library and other APIs.
V3.4.0 1-Mar-12 (V3.4.0.0)
  • Convert V3.3.2.3 to new version.
  • Reset pad values for INT8 and TT2000 data types.
V3.4.1 10-May-12 (V3.4.1.0)
  • Bugs fixed in core library.
  • Minor changes in skt2cdf tool program and cdfdump help.
  • Fixed cdfmerge while using text file for input files control.
  • Modified Java CDF class's open method to delay collecting variable and attribute data.
V3.5.0 10-Oct-12 (V3.5.0.0 beta)
  • Bugs fixed in core library and tool programs.
  • Used open source zlib source code to replace CDF's modified GZIP compression/decompression from original code by Jean-loup Gailly and Mark Adler.
  • Modified the default blocking factor for compressed variable data to improve performance.
15-Sep-13 (V3.5.0.1)
  • Fixed minor memory leaks in core library and JNI.
  • Added a new option to cdfdump tool for how to show the floating-point values if FORMAT entry is missing.
  • Added support for Visual Basic on Windows.
  • Pad value is set when a variable is created.
  • Fixed code to save the CDFid for 64-bit Fortran when a CDF is opened/created, even though only 4 bytes of it are used in Fortran code.
  • Default pad values are set with rather invalid values, so they can be recognized easily.
31-Mar-14 (V3.5.0.2)
  • Use the latest open source zlib version 1.2.8.
  • Properly handle writing a string-typed pad value that could contain garbage if the value's length is less than the variable's defined number of elements.
  • Added support for NaN, INF, -INF for floating point values.
  • Minor bugs fixed.
V3.6.0 01-Mar-15 (V3.6.0.3)
  • Added a new leap second for 07-01-2015 in the leap second table.
  • Added a new header field in a CDF to record the leap second table the file is based upon. A set of get/set operations were implemented in all APIs.
  • Added a new record delete option in the library, which will rearrange remaining records for sparse record variables.
  • Skipped the ASCII check on a given CDF file name when reading. When creating a CDF, only the file name, not the directory portion (if provided), is required to be ASCII.
  • Tool cdfconvert has a few new options: 1. sort the keyed variable, e.g., Epoch, for the output file, 2. globally reset the blocking factor, 3. reset the leap second last updated date, 4. adjust the TT2000 time.
  • Tool cdfdump was modified to detect whether a CDF is a valid one, even at the variable data level.
  • Minor bugs fixed.
10-Jun-15 (V3.6.0.4)
  • Modified the cdfjava jar file. Extended the CDF status message length.
V3.6.1 20-Sep-15 (V3.6.1.0)
  • Used the designated system temporary folder, e.g., /tmp for Linux/Unix/MacOSX, to hold the temporary file(s) while doing compression/decompression.
  • Set temporary file name using the random number generator with process id and current time as the seed.
  • Added support for computing TT2000 from UTC if the passed day is DOY (day of the year from January 1st).
  • Added a few options to cdfconvert tool program.
V3.6.2 20-Mar-16 (V3.6.2.0)
  • Modified the Makefile and installation process to support Mac OS X El Capitan (10.11).
  • Modified the library to handle string data shorter than the defined length (number of elements) when reading/writing; the remainder, starting from the NUL, is filled with spaces.
15-May-16 (V3.6.2.1)
  • Bug fixed for computing TT2000 time if only year/month/day components are entered.
V3.6.3 5-Oct-16 (V3.6.3.0)
  • Added a new leap second for 1/1/2017.
  • Added a new set of CDFread C-based functions.
  • Added new features in cdfdump, cdfstats tool programs.
10-Jan-17 (V3.6.3.1)
  • Modified the data write functions for the TT2000 data type to update the leap-second-last-updated header field from the leap second table. This resolves a problem when a master/template CDF is used for creating CDFs, as its header field value was otherwise never updated.
V3.6.4 20-Mar-17 (V3.6.4.0)
  • Used C-based function to create the temporary files, in 'mycdftmp.XXXXXX' form, for compression/decompression.
  • Used a more orderly check for the directory to hold the temporary files.
  • Modified cdfexport to not truncate variable name for output.
  • Have a separate patch for supporting IDL 8.6.
V3.7.0 20-Apr-18 (V3.7.0.0)
  • Allowed Null-terminating string for variable data and attribute entries.
  • Allowed multiple strings for variable attribute entry.
  • Added support for ARM architecture.
  • Added Itanium IA64 on OpenVMS.
  • Added pure Java package, cdfj.jar, for CDF read/write.
  • Enhanced the CDF XML schema.
V3.7.1 29-May-19 (V3.7.1.0)
  • Added a set of more generalized CDF epoch data encoding and parsing functions. Modified the default encoded epoch data to be of ISO 8601 form.
  • Added a new set of CDF epoch data conversion to/from Unix time.
  • Modified cdfconvert tool program to set the rVariables' dimension to zero (0) when converting a CDF with zMode 2. A similar change is also applied to the cdfexport tool program.
  • Enhanced the CDF XML schema.
  • Bugs fixed.
V3.8.0 6-Mar-20 (V3.8.0.0)
  • Modified the code to use a variable's FILLVAL value, in place of its pad value, for the variable's missing data, provided the FILLVAL attribute exists and its data type is equivalent to the variable's.
  • String typed variable's pad value is filled with a single space and followed by NUL(s), instead of multiple spaces.
  • Changed skt2cdf tool program to allow handling a skeleton table that comes directly from Windows (with \r\n at the end of each line) on non-Windows systems.
  • Added new options to cdfconvert tool program to
    • Remove non-varying dimension(s) from the source zVariable(s) if it does not have DEPEND_* attribute defined to the destination variable.
    • Replace any pad value(s) in a variable's data by its FILLVAL value, if FILLVAL attribute exists and has an equivalent data type as variable's.
  • Modified the tool programs to use FORMAT attribute to encode both data and metadata if the format is to be used. These include all C-based and Java-based tools.
  • A new option is added to the cdf2skt tool to allow users to choose how to display a variable's metadata and data, either with or without the format.
  • Modified the default encoded epoch data to be of ISO 8601 form.
  • Bugs fixed.
9-Jul-20 (V3.8.0.1)
  • Fixed the CDF epoch breakdown functions, which could return an incorrect day.

EXPRES Stellar-Signals Project

Overview

The EXPRES Stellar-Signals Project offers spectroscopic and photometric data of several stars. EXPRES (the EXtreme-PREcision Spectrograph) is an environmentally stabilized, fiber-fed, R=137,500 optical spectrograph that has demonstrated sub-m/s RV precision [4,10,11]. We are offering this high-fidelity data to the wider community in order to allow different groups to utilize their expertise and innovations towards disentangling center-of-mass motion of a star and photospheric velocities embedded in the data.

This webpage contains data links, project guidelines, and data tips. Any questions or feedback can be sent to Lily Zhao (lily.zhao@yale.edu).

Initially, extracted EXPRES data of HD 101501 will be released, followed by data from HD 34411, HD 217014 (i.e. 51 Peg), and HD 10700 (i.e. tau Ceti). In addition to spectroscopic data, we will also provide photometry from APT [2], radial-velocity (RV) measurements, and classic activity indicators. RVs and activity indicators will additionally be provided in a DACE compatible file.

EXPRES data are meant to serve as an example of the data being produced by next-generation spectrographs. The ultimate goal of this project is to publish a report summarizing the different methods currently in use and their relative benefits. A description of the project and its intent has been codified in an AAS research note [14], which should be cited in reference to this data release.

The EXPRES project is supported by NASA XRP 80NSSC18K0443, NSF AST-1616086, the Heising-Simons Foundation, and an anonymous donor in the Yale community.

The Data on Offer

Each EXPRES observation has an approximate per-pixel SNR of 250 at 550 nm, which translates to about 30 cm/s formal RV errors. Most stars have three or four consecutive observations per night, for an equivalent SNR of about 500, and 20+ nights of observations.


Data for HD 101501, which exhibits clear evidence of stellar activity, will be released first to serve as a test case for both the data and the project guidelines. Following this test case, data for HD 34411, HD 217014, and HD 10700 will be released, including the most up-to-date data collected during the test period with HD 101501.

Data are hosted on Yale Box. Please e-mail Debra Fischer (debra.fischer@yale.edu) and Lily Zhao (lily.zhao@yale.edu) to request access.

  1. HD 101501, G8V B, $\log R'_{HK}$ = -4.54
    (45 observations, 22 nights, Feb. 10, 2019 - Nov. 26, 2020)
  2. HD 26965, K1V, $\log R'_{HK}$ = -5.01
    (114 observations, 37 nights, Aug. 20, 2019 - Nov. 27, 2020)
  3. HD 10700, i.e. tau Ceti, G8V, $\log R'_{HK}$ = -4.98
    (174 observations, 34 nights, Aug. 15, 2019 - Nov. 27, 2020)
  4. HD 34411, G0V, $\log R'_{HK}$ = -5.09
    (188 observations, 58 nights, Oct. 08, 2019 - Nov. 27, 2020)

Note: HD 26965 has taken the place of HD 217014 for targets on offer. HD 26965 has more data of higher quality and is more scientifically interesting.

For each star, we offer:

  • Raw data upon request
  • Meta-data (e.g. airmass, moon distance, etc.), included in the primary FITS extension header
  • Telemetry (e.g. temperatures, pressures, etc.) upon request
  • Extracted data [9], including
    • Wavelengths (+ chromatic barycentric corrected wavelengths [8])
    • Flat-relative extracted spectrum
    • Continuum model
    • Blaze function
    • SELENITE telluric model [7]
  • Merged 1D Spectra
  • Cross-Correlation Functions
  • Radial velocities
    • MJD (of the photon-weighted midpoint)
    • Instrument Epoch
    • RVs
    • RV Uncertainties
  • Activity indicators
    • S index
    • H-α emission
    • H-α equivalent width
    • Cross correlation function's (CCF) FWHM
    • Measurement of the skew in the CCF's bisector
    • Velocity span of the CCF
    • Asymmetry parameter of a Bi-Gaussian fit to the CCF
    • Asymmetry parameter of a skew normal Gaussian fit to the CCF
  • APT photometry [2]

Project Guidelines

To aid the EPRV community in the ever-important fight to disentangle center-of-mass motion (due to planets) from photospheric velocities (due to stellar variability/activity), we will compile a final report of this project. This report will summarize the different methodologies, their relative merits, and their data requirements. In order to analyze the results in a consistent way, we request that for each method, groups submit:

  • Activity-adjusted/removed RVs (i.e. cleaned RVs)
  • RV signal modeled out as attributed to either activity or noise
  • Numerical value of any indicators used
  • For GPs: choice of kernel and best-fit hyper parameters

These values will be used to derive the RMS of cleaned, activity-less RVs and the presence (or lack thereof) of activity-driven periodicities for all methods. Participants are encouraged to publish their own papers using the provided EXPRES data and whatever analysis they like.

In working towards a cohesive report and ease of analysis, we provide Submission Guidelines and Method Questions. The Submission Guidelines, which can be found in the following document, specify what each participating group should submit and standardize file locations, file names, and table structure.

Submission Instructions

We also provide Method Questions to give more detailed information about each method a group tries. These questions focus on 1) a general description of the method, 2) the data requirements for the method, 3) the process of applying the method, and 4) the results of the method. Templates in both LaTeX and Word Document formats are linked below. These templates list all method questions with spaces for answers.

Method Questions [LaTeX]

Method Questions [Microsoft Word]

Project Timeline

The target submission date for the report is February 2021. Results from each method on all data sets should therefore be submitted by end of December 2020, allowing a month for the report to be written, reviewed, and revised by everyone involved.

Initially, we are providing only the HD 101501 data to serve as a test run, in order to gauge whether changes or adjustments are needed to either the data or the project guidelines. The other three data sets, which will include the most up-to-date data, will be made available following this initial run. A detailed timeline is given below.

  • Aug. 31, 2020: Information about the project and data links are made public.
  • Sept. 14, 2020: Groups must confirm participation by sending a group name, a table of group members, and a rough estimate of number of methods to be tried to Debra Fischer (debra.fischer@yale.edu) and Lily Zhao (lily.zhao@yale.edu).
  • Oct. 9, 2020: After some time with the data, a check-in meeting among all participants is held to cover any questions or clarifications concerning the data or the project guidelines.
  • Oct. 31, 2020: Groups submit descriptions of all methods and results for HD 101501, revealing any tricks in the data or guidelines and allowing the rest of the project to be more of a treat.
  • Nov. 13, 2020: A second check-in meeting is held with all participants to debrief the results of the test run with HD 101501 and identify possible areas for collaboration. Three additional data sets will be released following this meeting.
  • Feb. 15, 2021 : Deadline to submit final results/descriptions for all data sets on Yale Box.
  • Mar. 14, 2021 : Deadline to submit final results for the injected planet signals test on Yale Box.
  • March: A third check-in meeting to discuss conclusions from the project as well as the final report and its scope.

While there are scheduled check-ins for participants to ask questions or raise suggestions, feedback is always welcome and can be sent to Lily Zhao (lily.zhao@yale.edu).

Working with the Extracted Data

The FITS file of each observation contains 3 HDUs. The primary HDU (i.e. index 0) contains the header with documented metadata (e.g. airmass). The first extension HDU (i.e. index 1) contains the optimally extracted data. The second extension (i.e. index 2) contains data from the low-resolution chromatic exposure meter spectrograph and barycentric correction information.

For example, to access the optimally extracted, 2-D spectrum in python, one would use:

To read in the same data using IDL:

The data variable then contains the following information:

  • wavelength gives the wavelengths for each pixel across the entire detector.
  • bary_wavelength gives the chromatic-barycentric-corrected wavelengths for each pixel [8]
  • spectrum gives the extracted values (Note: the EXPRES pipeline uses a flat-relative optimal extraction method, meaning these values are relative to a flat image constructed for the night. They are therefore unitless and do not correspond directly to any physical properties [9]).
  • uncertainty gives errors for each pixel that were propagated through the extraction. As such, when working with continuum normalized spectra, these uncertainties should also be continuum normalized.
  • continuum gives a fit to the continuum for each order.
  • blaze gives the counts of the median flat exposure that the extractions were done with respect to. Multiplying the spectrum by this blaze will return the original counts for each pixel. These counts, which are presumed Poisson distributed, provide the best measure of SNR for the exposure. Note that because this is simply a median flat exposure, it may contain some detector defects that would otherwise be flat-fielded out in the extracted exposures through the flat-relative extraction. Users are encouraged to fit a smooth function to this blaze before multiplying by the extracted spectrum to recover true counts.
  • tellurics gives the SELENITE-constructed telluric model specific to this exposure [7].
  • pixel_mask gives a mask that excludes pixels with too low signal to return a proper extracted value. This encompasses largely pixels on the edges of orders. The extracted values for these pixels are set to NaNs.
  • excalibur gives the wavelengths generated by Excalibur, a non-parametric, hierarchical method for wavelength calibration [13]. These wavelengths suffer less from systematic or instrumental errors, but cover a smaller spectral range (~4775-7225 A for epoch 4 and ~4940-7270 A for epoch 5). We have found that the Excalibur wavelengths return RVs with lower scatter.
  • bary_excalibur gives the chromatic-barycentric-corrected Excalibur wavelengths for each pixel [8].
  • excalibur_mask gives a mask that excludes pixels without Excalibur wavelengths, which have been set to NaNs. These are pixels that do not fall between calibration lines in the order, and so are only on the edges of each order, or outside the range of LFC lines, as described above.

For example, to plot the continuum-normalized spectrum for relative order 66 of EXPRES data in python (which has an index of 65 and corresponds to absolute order 160-65=95), one would use:
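A Python sketch of the normalization itself; the short arrays below are stand-ins for one order of data['spectrum'], data['continuum'], and data['uncertainty'] (real orders have 7920 pixels), and the plotting call is indicated in a comment.

```python
import numpy as np

# Stand-ins for one order of the extracted arrays; relative order 66
# corresponds to index 65 (absolute order 160 - 65 = 95).
spectrum = np.array([0.96, 1.02, 0.55, 1.01])     # flat-relative, unitless
continuum = np.array([0.98, 1.00, 1.02, 1.00])    # continuum fit for the order
uncertainty = np.array([0.01, 0.01, 0.02, 0.01])

norm_spec = spectrum / continuum
norm_err = uncertainty / continuum  # uncertainties must be normalized too

# Plotting would then be, e.g. with matplotlib:
#   plt.plot(wavelength[65], norm_spec)
```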

In IDL:

This code re-creates the bottom, longer plot in the below figure.

Top left of the above figure plots the spectrum, the returned values from the flat-relative optimal extraction. These values are relative to the median flat for each night and so are unitless with no immediate physical interpretation. Top right shows the spectrum multiplied by the blaze, which recovers the raw, Poisson-distributed counts for each pixel. The bottom plot shows the continuum normalized spectrum from dividing the spectrum values by the continuum fit. Note that similar to the spectrum, one should also divide the uncertainty values by the continuum when working in a continuum normalized regime.

All returned arrays have the format (86, 7920) in python, [7920, 86] in IDL. This corresponds to 86 total orders each with 7920 pixels. The entire EXPRES detector is extracted, giving 7920 pixels in each order. However, in practice, the signal drops off sharply at the edge of each order given the nature of the blaze function. Pixels with too little signal to be extracted are replaced with NaNs in the spectrum and uncertainty arrays (shown as black dots in the above figure). They are masked out by the mask returned by the pixel_mask keyword.

Extracting the full detector also results in many low-signal, high-uncertainty extracted values on the edges of orders. This can produce noisy, even negative, extracted values, but ones that are properly associated with huge uncertainties. Therefore, in theory, these points should pose no problem for methods that take the uncertainties into account. In practice, it may be easiest to implement an uncertainty cut on extracted values (i.e. mask out all values with a large fractional uncertainty) as well as masking out the NaN values on either edge of an order.
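One way to implement both suggested cuts, sketched in Python with stand-in values; the 50% fractional-uncertainty threshold is illustrative, not a recommendation from the EXPRES team.

```python
import numpy as np

# Stand-ins for a stretch of one order near an edge: NaNs where the
# extraction was skipped, plus one low-signal, high-uncertainty pixel.
spectrum = np.array([np.nan, 0.02, 1.01, 0.98, np.nan])
uncertainty = np.array([np.nan, 0.05, 0.01, 0.01, np.nan])
pixel_mask = ~np.isnan(spectrum)       # as returned under 'pixel_mask'

with np.errstate(invalid="ignore"):    # NaN / NaN is expected here
    frac_err = np.abs(uncertainty / spectrum)
good = pixel_mask & (frac_err < 0.5)   # drop edges and noisy pixels
# Only the two well-measured central pixels survive both cuts.
```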

Meta-Data

Meta-data are contained in the primary HDU's header (i.e. index 0). Keywords for some of the most commonly accessed values are:

  • 'AEXPTIME': exposure time of an observation in seconds.
  • 'MIDPOINT': geometric midpoint between the shutter opening and closing in UTC time and 'isot' format.
  • 'AIRMASS': the average airmass at the center of the exposure, with the beginning and ending airmass given by 'AMBEG' and 'AMEND' respectively.
  • 'MOONDIST': distance in degrees from the moon.
  • 'SUNDIST': distance in degrees from the sun.

The photon-weighted midpoint is found in the second extension HDU (i.e. index 2) along with other exposure meter data.

  • 'HIERARCH wtd_mdpt': photon-weighted midpoint time of exposure.
  • 'HIERARCH wtd_single_channel_bc': a single, non-chromatic barycentric velocity value, where the velocity is given as a unitless redshift, z=v/c. Note: the bary_wavelength and bary_excalibur keywords in the first extension HDU use chromatic barycentric corrections.
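A sketch of pulling these values and converting the barycentric redshift to a velocity. The headers here are mocked with made-up numbers so the sketch stands alone; with a real file they would come from hdus[0].header and hdus[2].header.

```python
from astropy.io import fits

# Mock headers with made-up values, mimicking the keywords listed above.
hdr0 = fits.Header()
hdr0["AEXPTIME"] = 180.0                      # exposure time, seconds
hdr0["MIDPOINT"] = "2019-02-10T05:23:11.500"  # geometric midpoint, isot

hdr2 = fits.Header()
hdr2["HIERARCH wtd_single_channel_bc"] = 3.3e-5  # unitless redshift z

# z = v/c, so the barycentric velocity in m/s is:
c = 299_792_458.0
bc_velocity = hdr2["wtd_single_channel_bc"] * c
```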

Line Spread Function (LSF)

The FWHM of an LFC line ranges from 3.9-5 pixels across the detector. More specifically, EXPRES's line spread function can best be represented by either a super-Gaussian or a rectangle (i.e. a top-hat function) convolved with a Gaussian. A super-Gaussian fit tends to behave better because there is less degeneracy at smaller widths. Let a super-Gaussian be defined as:

$$A \, \exp\left(-\left(\frac{(x-x_0)^2}{2\sigma_x^2}\right)^P\right)$$

where $A$ is the amplitude and $x_0$ is the center of the super-Gaussian. The width $\sigma_x$ varies smoothly across the detector and is on the order of 1.4 - 2.8 pixels. The exponent, $P$, ranges between 1 and 2. For more exact PSF parameters across the detector, we are happy to share LFC data for fitting or more specifics about the PSF parameters upon request.
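The super-Gaussian above, written out as a Python function; the parameter values used are illustrative, chosen within the ranges quoted in the text.

```python
import numpy as np

def super_gaussian(x, A, x0, sigma, P):
    """A * exp( -(((x - x0)^2) / (2 sigma^2))^P )."""
    return A * np.exp(-(((x - x0) ** 2) / (2.0 * sigma ** 2)) ** P)

x = np.linspace(-5.0, 5.0, 11)
lsf = super_gaussian(x, A=1.0, x0=0.0, sigma=2.0, P=1.5)
# At the center the profile equals the amplitude A; for P = 1 this
# reduces to an ordinary Gaussian.
```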

Cross-Correlation Function (CCF)

The CCF for each observation is contained in a FITS File with 3 HDUs. The primary HDU (i.e. index 0) has only a header with relevant meta data (e.g. final calculated velocity and error, other information about the observation). The first extension HDU (i.e. index 1) contains the combined CCF while the second extension (i.e. index 2) contains the order-by-order CCFs.

With the second data release, CCFs have generously been provided by the Penn State team and optimized through work done by Eric Ford, Alex Wise, Marina Lafarga Magro, and Heather Cegla. A standard version of the CCFs that uses ESPRESSO masks, uniform line weights per target, and a CCF mask width tuned to the LSF of EXPRES is distributed. If teams are interested in different variants of the CCF, please contact Eric Ford (eford[at]psu[dot]edu).

The first extension HDU (i.e. index 1) contains the velocity grid on which the CCF is evaluated under the keyword 'V_grid' in units of cm/s and the combined CCF for the observation under the keyword 'ccf', with errors under 'e_ccf'. The overall CCF only combines a subset of orders (relative orders 42-71, which corresponds to absolute orders 89-118), which are orders with high signal and excalibur wavelengths.

The second extension HDU (i.e. index 2) contains the CCFs for each individual order. Each order is evaluated on the same velocity grid as given in the first extension HDU. The CCF values are given under the keyword 'ccfs' with the errors under 'errs'. The velocity and associated error from each order is also given under 'v' and 'e_v' respectively. Note that orders with no signal will have all zeros in place of a ccf.

Radial Velocities

Radial velocities, along with their errors and barycentric midpoint times, are collected in a CSV table for each target along with activity indicators (see below). This same information is also provided in a DACE compatible file. For each observation, two RV measurements are given. CCF RVs are derived using the classic cross-correlation function method of finding RVs. The ESPRESSO mask and line weighting specific to each target star's spectral type is used. For the window function, we use a half cosine.

CBC RVs refer to RVs found using the chunk-by-chunk (CBC) method. Each observation is split up into ~2 A chunks. An RV for each chunk is found by shifting a template spectrum to match the observed spectrum. RVs for each chunk are weighted by how well behaved that chunk is over time and combined to recover an RV for each exposure. The RVs included in the provided data tables were found using 42, 140-pixel-wide chunks in relative orders 43-75 (i.e. index 42-74, absolute orders 118 to 86) ranging from pixel 770 to 6650 across each order. The Excalibur wavelengths were used.

EXPRES data are divided into separate instrument epochs, which demarcate when hardware changes/fixes were implemented. Only data from epochs 4 and 5 are given, which represent post-commissioning EXPRES. Epoch 4 started on Feb. 7, 2019 and continued until Aug. 04, 2019. Epoch 5 started on Aug. 04, 2019 and is the current epoch. Epoch information is included in the radial velocity/activity indicator table. For more information, see Petersburg et al. 2020, section 3 [9].

Between epoch 4 and epoch 5, the only adjustments made were to the LFC (laser frequency comb) to make it more stable. As a result, the LFC covers a smaller, redder wavelength range in epoch 5 than it did in epoch 4: ~4775-7225 A in epoch 4 versus ~4940-7270 A in epoch 5. We expect this to have a small effect on the wavelength precision for these orders and to change the range of available Excalibur wavelengths. The changes between epochs should only affect the wavelength solution, but participants can consider treating the data from different epochs independently.
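Treating the epochs independently amounts to grouping the table on its epoch column; the column names here ('epoch', 'rv') are assumptions about the CSV layout, shown with made-up rows:

```python
import pandas as pd

# Toy stand-in for the per-target RV/activity table
df = pd.DataFrame({
    'epoch': [4, 4, 5, 5, 5],
    'rv':    [1.2, -0.8, 0.4, 0.0, -0.3],
})

# Model each instrument epoch independently by splitting on the epoch flag
by_epoch = {epoch: grp.reset_index(drop=True)
            for epoch, grp in df.groupby('epoch')}
n4, n5 = len(by_epoch[4]), len(by_epoch[5])
```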

Activity Indicators

Activity indicators are provided in a table that includes all activity indicators for all observations of a given target along with radial velocity information. They are also given in the first extension HDU's header of each FITS file. Note: missing activity indicator values are replaced by NaNs.

Unless otherwise described, the cited error for each activity indicator is given as the spread in that indicator's values across seven quiet stars, totaling ~400 observations. A histogram of the indicator's values over every observation of these quiet stars was fit with a Gaussian; the width of that Gaussian is reported as the error.
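A minimal version of this error estimate, on synthetic indicator values, looks like:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "quiet star" indicator values; the real estimate uses ~400
# observations of seven quiet stars, as described above.
rng = np.random.default_rng(0)
values = rng.normal(loc=0.17, scale=6.6e-3, size=400)

counts, edges = np.histogram(values, bins=30)
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

p0 = [counts.max(), values.mean(), values.std()]
(amp, mu, sigma), _ = curve_fit(gauss, centers, counts, p0=p0)
sigma = abs(sigma)   # the reported error is this fitted width
```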

The S index is given in the FITS headers under the keyword 'S-VALUE'. This gives the ratio of the emission in the Ca II H and K line cores to that of nearby continuum regions, calibrated to be consistent with the Mt. Wilson Observatory catalog [1]. To build SNR, all exposures taken in a night are combined to find a nightly S-value, which is reported for all observations taken that night. For a number of quiet stars, the spread in the returned S index is 6.611E-3.

The H-alpha core emission is given in the FITS header under the keyword 'HALPHA'. This represents the ratio of the H-alpha core emission relative to the continuum. The core emission is found as the minimum of the H-alpha line, which has been over-sampled using a cubic spline. The continuum is given as the median of a nearby, line-free region. The approximate error in the H-alpha core emission value is 1.879E-2 with no real units.
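A sketch of that measurement on a synthetic H-alpha line (the wavelength windows here are chosen only for illustration, not the pipeline's actual regions):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic H-alpha region: a Gaussian absorption line on a flat continuum
wave = np.linspace(6560.0, 6566.0, 200)
flux = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 6562.8) / 0.5) ** 2)

# Over-sample the line with a cubic spline and take its minimum as the core
spline = CubicSpline(wave, flux)
fine = np.linspace(6561.0, 6564.5, 5000)
core = spline(fine).min()

# Continuum: median of a nearby line-free region
continuum = np.median(flux[(wave > 6564.8) & (wave < 6566.0)])
halpha = core / continuum
```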

The H-alpha equivalent width is given in the FITS header under the keyword 'HWIDTH'. This value gives the equivalent width of the H-alpha absorption line. The approximate error for the H-alpha equivalent width is 0.0142 A.

The CCF FWHM (i.e. cross correlation function's full width at half maximum) is given in the FITS header under the keyword 'CCFFWHM' (yes, that's two F's in the middle). This value gives the width of the CCF as determined by fitting a Gaussian to the CCF. The error from the covariance matrix of this fit is given in the FITS header under the keyword 'CCFFWHME'. Empirically, however, the spread in CCF FWHM values for a number of quiet stars is 5.316 m/s.
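The fit itself can be sketched as follows; the CCF here is synthetic and noiseless:

```python
import numpy as np
from scipy.optimize import curve_fit

v = np.linspace(-15.0, 15.0, 301)                   # km/s
ccf = 1.0 - 0.4 * np.exp(-0.5 * (v / 3.0) ** 2)     # synthetic, sigma = 3 km/s

def gauss_dip(v, c, depth, mu, sigma):
    return c - depth * np.exp(-0.5 * ((v - mu) / sigma) ** 2)

p, cov = curve_fit(gauss_dip, v, ccf, p0=[1.0, 0.3, 0.5, 2.0])
c, depth, mu, sigma = p
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
# a formal FWHM error would follow from cov, analogous to 'CCFFWHME'
```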

We determine the bisector of the CCF by finding the midpoint between the left and right wings of the CCF at 100 equally spaced points ranging from the bottom to the top of the CCF. The 'BIS' keyword gives the difference between the mean of the top (60-90 percentile) and the bottom (10-40 percentile) of the CCF bisector (as defined in Queloz+ 2001 [3]). The approximate error of this BIS measurement is 2.515 m/s.
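The bisector construction can be sketched as below; the linear interpolation of the wings is a simple stand-in for the pipeline's implementation:

```python
import numpy as np

def bis(v, ccf, n_depths=100):
    """Bisector inverse slope: mean(top 60-90%) - mean(bottom 10-40%)."""
    i_min = np.argmin(ccf)
    levels = np.linspace(ccf[i_min], ccf.max(), n_depths + 2)[1:-1]
    bisector = np.empty(levels.size)
    for k, level in enumerate(levels):
        # velocity at which each wing crosses this CCF level
        left = np.interp(level, ccf[:i_min + 1][::-1], v[:i_min + 1][::-1])
        right = np.interp(level, ccf[i_min:], v[i_min:])
        bisector[k] = 0.5 * (left + right)
    frac = np.linspace(0.0, 1.0, levels.size)   # 0 = bottom, 1 = top
    top = bisector[(frac >= 0.60) & (frac <= 0.90)].mean()
    bottom = bisector[(frac >= 0.10) & (frac <= 0.40)].mean()
    return top - bottom

v = np.linspace(-10.0, 10.0, 401)
ccf = 1.0 - 0.5 * np.exp(-0.5 * (v / 2.0) ** 2)   # symmetric, so BIS ~ 0
bis_value = bis(v, ccf)
```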

The velocity span of the CCF is defined as the difference between the means of Gaussian fits to the top and the bottom of the CCF. Top and bottom points are defined as those more than one sigma from the center and those within one sigma of the center, respectively. This value is given in the FITS header under the keyword 'CCFVSPAN'. The approximate error of this velocity span measurement is 2.297 m/s.

Asymmetry in the CCF is also probed using a bi-Gaussian fit. A bi-Gaussian incorporates an asymmetry parameter in the fit, which can be used as an activity indicator. This value is given in the FITS header under the keyword 'BIGAUSS'. The approximate error of this bi-Gaussian measurement is 0.168 with no units.
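One common bi-Gaussian form joins two half-Gaussians of different widths at the center; that parametrization is an assumption here, not necessarily the pipeline's. Fit to a symmetric synthetic CCF, it should recover an asymmetry near zero:

```python
import numpy as np
from scipy.optimize import curve_fit

def bigauss(v, c, depth, mu, fwhm, asym):
    """Gaussian dip whose left/right half-widths differ by the factor asym."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    s = np.where(v < mu, sigma * (1.0 - asym), sigma * (1.0 + asym))
    return c - depth * np.exp(-0.5 * ((v - mu) / s) ** 2)

v = np.linspace(-15.0, 15.0, 301)
ccf = 1.0 - 0.4 * np.exp(-0.5 * (v / 3.0) ** 2)   # symmetric test CCF
p, _ = curve_fit(bigauss, v, ccf, p0=[1.0, 0.3, 0.1, 7.0, 0.1])
asym_fit = p[4]
```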

The skew normal similarly gives a measure of asymmetry in the CCF by incorporating a Gaussian model with an inherent asymmetry parameter. This asymmetry value is given in the FITS header under the keyword 'SKEWNORMAL'. The approximate error of this skew normal measurement is 0.0305 with no units.
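As an illustration, the snippet below recovers the shape parameter of a skew-normal profile by least squares; applying the same fit to an inverted CCF would be analogous, though the pipeline's exact parametrization is not specified here:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

def model(v, amp, a, loc, scale):
    """Scaled skew-normal profile; `a` is the asymmetry (shape) parameter."""
    return amp * skewnorm.pdf(v, a, loc=loc, scale=scale)

v = np.linspace(-15.0, 15.0, 301)
profile = 2.5 * skewnorm.pdf(v, 3.0, loc=-1.0, scale=4.0)  # known asymmetry
p, _ = curve_fit(model, v, profile, p0=[2.0, 2.0, 0.0, 3.5])
asymmetry = p[1]   # should recover a value near 3
```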

APT Photometry

Photometry is from Fairborn Observatory's 0.74-m Automatic Photoelectric Telescopes (APT) in southern Arizona [2]. The APT is equipped with a single-channel photometer that uses an EMI 9124QB bi-alkali photomultiplier tube to measure the difference in brightness between a program star and three nearby comparison stars in the Strömgren b and y passbands.

Photometry is provided in CSV files with three columns. The first column, 'Time [MJD]', gives the time of each exposure. The second column, 'V [mag]', gives relative magnitudes in the V-band. The last column, 'Trend [mag]', represents a model of long-term trends present in the light curve, obtained by smoothing the light curve over 100 days [12]. These data are accompanied by a table summarizing the results from individual observing seasons.
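Reading the CSV and subtracting the provided trend is then straightforward; the rows below are made up, but the column names follow the description above:

```python
from io import StringIO

import pandas as pd

# Made-up rows in the documented three-column layout
csv = StringIO(
    "Time [MJD],V [mag],Trend [mag]\n"
    "58500.1,0.012,0.010\n"
    "58501.1,0.008,0.010\n"
)
df = pd.read_csv(csv)

# Remove the long-term trend to isolate shorter-term variability
df['V detrended [mag]'] = df['V [mag]'] - df['Trend [mag]']
```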

Requested Acknowledgements

We are eager to have people from different groups, different countries, and different backgrounds participate. Please let us know (e-mail debra.fischer@yale.edu) if you plan to publish results using the data made available through this project. We kindly request that papers using these data include the statement below in the acknowledgments and cite the following papers. If DACE is used for data analysis, please also include the DACE-specific acknowledgment statement.

EXPRES Data Acknowledgment Statement

'These results made use of data provided by the Yale-SFSU-Lowell EXPRES team using the EXtreme PREcision Spectrograph at the Lowell Discovery telescope, Lowell Observatory. EXPRES was designed and built at Yale with financial support from NSF MRI-1429365, NSF ATI-1509436, NASA XRP 80NSSC18K0443, NSF AST-1616086, the Heising-Simons Foundation, and an anonymous donor in the Yale community.'

Citations

This BibTeX file contains the citations for all articles listed below.

  1. Jurgenson+ 2016 [export citation]: the original instrument paper for EXPRES
  2. Levine+ 2018 [export citation]: paper describing the status and performance of the Lowell Discovery Telescope
  3. Blackman+ 2020 [export citation]: paper assessing instrument performance of EXPRES
  4. Petersburg+ 2020 [export citation]: paper describing the extraction pipeline of EXPRES
  5. Brewer+ 2020 [export citation]: paper demonstrating EXPRES on-sky precision
  6. Zhao+ 2020 [export citation]: research note describing this project, to be cited before the publication of the final report.

DACE Acknowledgment Statement

'This publication makes use of The Data & Analysis Center for Exoplanets (DACE), which is a facility based at the University of Geneva (CH) dedicated to extrasolar planets data visualisation, exchange and analysis. DACE is a platform of the Swiss National Centre of Competence in Research (NCCR) PlanetS, federating the Swiss expertise in Exoplanet research. The DACE platform is available at https://dace.unige.ch.'

Related Papers

  1. Zhao, L.L.; Fischer, D.A.; Ford, E.B.; Henry, G.W.; Roettenbacher, R.M.; Brewer, J.M. 2020 'The EXPRES Stellar-Signals Project I. Data Release', RNAAS, 4, 156Z
  2. Zhao, L.L.; Hogg, D.W.; Bedell, M.; Fischer, D.A. 2020 'Excalibur: A Non-Parametric, Hierarchical Wavelength-Calibration Model for Precision Spectrographs', AJ, 161, 2
  3. Cabot, S.H.C.; Roettenbacher, R.M.; Henry, G.W.; Zhao, L.L.; Harmon, R.O.; Fischer, D.A.; Brewer, J.M.; Llama, J.; Petersburg, R.R.; Szymkowiak, A.E. 2020 'EXPRES. II. Searching for Planets Around Active Stars: A Case Study of HD~101501', AJ, 161, 1
  4. Brewer, J.M.; Fischer, D.A.; Blackman, R.T.; Cabot, S.H.C.; Davis, A.B.; Laughlin, G.; Leet, C.; Ong, J.M.; Petersburg, R.R.; Szymkowiak, A.E.; Zhao, L.L.; Henry, G.W.; Llama, J. 2020 'EXPRES. I. HD 3651 as an Ideal RV Benchmark', AJ, 160, 67
  5. Blackman, R.T. (and 27 co-authors) 2020 'Performance Verification of the EXtreme PREcision Spectrograph', AJ, 159, 238
  6. Petersburg, R.R.; Ong, J.M.; Zhao, L.L.; Blackman R.T.; Brewer, J.M.; Buchhave, L.A.; Cabot, S.H.C.; Davis, A.B.; Jurgenson, C.A.; Leet, C.; McCracken, T.M.; Sawyer, D.; Sharov, M.; Tronsgaard, R.; Szymkowiak, A.E.; Fischer, D.A. 2020 'An Extreme Precision Radial Velocity Pipeline: First Radial Velocities from EXPRES', AJ, 159, 187
  7. Blackman, R.T.; Ong, J.M. Joel; Fischer, D.A. 2019 'The Measured Impact of Chromatic Atmospheric Effects on Barycentric Corrections: Results from the EXtreme PREcision Spectrograph', AJ, 158, 40
  8. Leet, C.; Fischer, D.A.; Valenti, J.A. 2019 'Towards a Self-Calibrating, Empirical, Light-Weight Model for Tellurics in High-Resolution Spectra', AJ, 157, 187
  9. Levine, S.E.; DeGroff, W.T.; Bida, T.A.; Dunham, E.W.; Jacoby, G.H. 2018 'Status and performance of Lowell Observatory's Discovery Channel telescope and its growing suite of instruments', SPIE, 10700, 4PL
  10. Blackman, R.T.; Szymkowiak, A.E.; Fischer, D.A.; Jurgenson, C.A. 2017 'Accounting for Chromatic Atmospheric Effects on Barycentric Corrections', ApJ, 837, 18
  11. Jurgenson, C.; Fischer, D.; McCracken, T.; Sawyer, D.; Szymkowiak, A.; Davis, A.; Muller, G.; Santoro, F. 2016 'EXPRES: a next generation RV spectrograph in the search for earth-like worlds', SPIE, 9908, 6TJ
  12. Queloz, D.; Henry, G.W.; Sivan, J.P.; Baliunas, S.L.; Beuzit, J.L.; Donahue, R.A.; Mayor, M.; Naef, D.; Perrier, C.; Udry, S. 2001 'No Planet for HD 166435', A&A, 379, 279
  13. Henry, G.W. 1999 'Techniques for Automated High-Precision Photometry of Sun-like Stars', PASP, 111, 845H
  14. Duncan, D.K. (and 17 co-authors) 1991 'Ca II H and K Measurements Made at Mount Wilson Observatory, 1966--1983', ApJS, 76, 383