The eddy-resolving Ocean Forecasting Australia Model (OFAM) is used to downscale future climate projections from the CSIRO Mk3.5 climate model under scenario A1B for the 2060s. One simulation run without relaxation and another with relaxation to the expected sea surface temperature and sea surface salinity are archived, each with 3D fields of ocean temperature, salinity, currents, and sea surface height.
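The relaxation run restores the model's surface fields toward the expected (projected) sea surface temperature and salinity. A minimal Python sketch of such a Newtonian restoring (nudging) step is given below; the restoring timescale, time step, and variable names are illustrative assumptions and are not taken from the OFAM configuration.

    import numpy as np

    # Hypothetical Newtonian relaxation of a surface field toward a target:
    # dT/dt = ... + (T_target - T) / tau.  tau and dt are illustrative only.
    def relax_surface_field(field, target, dt_seconds, tau_days=30.0):
        tau_seconds = tau_days * 86400.0
        return field + dt_seconds * (target - field) / tau_seconds

    # Example: nudge SST toward a projected-2060s SST over one model time step
    # (the 968 x 1191 shape mirrors the OFAM horizontal grid described below).
    sst = np.full((968, 1191), 15.0)
    sst_target = np.full((968, 1191), 16.0)
    sst = relax_surface_field(sst, sst_target, dt_seconds=600.0)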
OFAM is based on version 4.0d of the Modular Ocean Model and uses a hybrid mixed layer model. The horizontal grid has 1191 and 968 points in the zonal and meridional directions respectively, with 1/10-degree horizontal resolution around Australia (90-180E, south of 17). Outside this domain the horizontal resolution decreases to 0.9 degrees across the Pacific and Indian basins (to 10E, 60W and 40N) and to 2 degrees in the Atlantic Ocean. OFAM has 47 vertical levels, with 10 m resolution down to 200 m depth. The topography for OFAM is a composite of sources including DBDB2 and GEBCO. Horizontal diffusion is zero. Horizontal viscosity is resolution- and state-dependent, following the Smagorinsky viscosity scheme (Griffies and Hallberg 2000). BRAN is a multi-year integration of OFAM that assimilates observations using an ensemble optimal interpolation (EnOI) scheme based on a stationary ensemble of intraseasonal model anomalies obtained from a non-assimilating model run. Observations include along-track SLA (atSLA) from altimeters and tide gauges, in situ temperature and salinity observations, and satellite SST.
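As a rough illustration of the EnOI analysis step used in BRAN, the sketch below computes an analysis increment from a stationary ensemble of model anomalies; the array names, ensemble size, and the absence of localisation are simplifying assumptions and do not reproduce the operational implementation.

    import numpy as np

    # EnOI analysis: x_a = x_f + B H^T (H B H^T + R)^(-1) (y - H x_f),
    # with B estimated once from a stationary ensemble of model anomalies A.
    def enoi_analysis(x_f, y, A, H, R):
        n = A.shape[1]                     # ensemble size
        HA = H @ A                         # anomalies in observation space
        Pf_Ht = (A @ HA.T) / (n - 1)       # B H^T
        S = (HA @ HA.T) / (n - 1) + R      # H B H^T + R
        return x_f + Pf_Ht @ np.linalg.solve(S, y - H @ x_f)

    # Tiny synthetic example: 50-point state, 5 observations, 20 anomaly members.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    H = np.eye(50)[::10]                   # observe every 10th grid point
    x_a = enoi_analysis(np.zeros(50), rng.standard_normal(5), A, H, 0.1 * np.eye(5))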
Vessel-mounted vertical acoustic data from Simrad EA500 and EK500 echosounders, at 12 kHz, 38 kHz, and 120 kHz, were collected on voyage SS 07/2004 of the Southern Surveyor off the west coast of Tasmania in July-August 2004, as part of a project to test and develop the blue grenadier acoustic observation system. Data were collected for the duration of the voyage. The 38 kHz and 120 kHz frequencies were calibrated at the start of the voyage.
This dataset contains temperature data from the East Indian Ocean. Data (including available XBT data) have been collected since 1778. They have been subjected to quality control as an activity of CSIRO and the BoM.
This dataset contains temperature data from the Tasman Sea. Data (including available XBT data) have been collected since 1778. They have been subjected to quality control as an activity of CSIRO and the BoM.
INSTANT: A New International Array to Measure the Indonesian Throughflow. The INSTANT field program (International Nusantara Stratification And Transport) began in August 2003 and consists of a 3-year deployment of an array of moorings and coastal pressure gauges that will directly measure sea level and full-depth in situ velocity, temperature, and salinity of the ITF. For the first time, simultaneous, multi-passage, multi-year measurements will be available, allowing the magnitude and properties of the interocean transport between the Pacific and Indian Oceans to be determined unambiguously. The array will also provide an unprecedented data set revealing how this complex and fascinating region responds to local and remote forcing at many timescales never before well resolved. Moorings were deployed at the following positions: (115 45.48, 8 26.77) (115 53.77, 8 24.56) (122 58.36, 11 31.76) (122 57.40, 11 22.19) (122 51.5, 11 16.6) (122 46.8, 11 9.67) (125 32.26, 8 32.33) (125 2.26, 8 24.04)
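The mooring positions above appear to be paired longitude and latitude in degrees and decimal minutes; that reading, and the assignment of east longitude and south latitude for the Indonesian seas, are assumptions made only for the small parsing sketch below.

    # Hypothetical helper: convert a '(deg min, deg min)' mooring position into
    # signed decimal degrees, assuming east longitude and south latitude.
    def to_decimal_degrees(entry):
        lon_part, lat_part = entry.split(",")
        lon_deg, lon_min = (float(x) for x in lon_part.split())
        lat_deg, lat_min = (float(x) for x in lat_part.split())
        return lon_deg + lon_min / 60.0, -(lat_deg + lat_min / 60.0)

    print(to_decimal_degrees("115 45.48, 8 26.77"))   # about (115.758, -8.446)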
The primary basis for the project was the analysis of existing plankton and larval fish samples and the collation of data sets on larval distribution derived from sampling across broad areas of southern and eastern Australia over the last 17 years. Some of these samples had been archived in the CSIRO Ian Munro Fish Collection, the Australian Museum or the South Australian Museum as part of the FRDC-funded regional larval fish archive (FRDC94/55). Other samples or data sets were held within the collections of collaborating institutions. The project focused its analyses on southern and south-east Australia, spanning the area from the Great Australian Bight (GAB) to northern NSW. This region was selected for four reasons. First, sampling had been most intensive in this region and the available data sets provided excellent spatial and seasonal coverage. Second, our ability to identify larvae to species was well developed for the region. Third, the oceanography of the region had been the subject of intensive study and provided a sound basis for linking biological data to physical processes. Fourth, additional sampling scheduled during the period of this project further enhanced sample coverage (specifically sampling by MAFRI in Bass Strait and by CSIRO in the GAB). The Larval Fish Database (LFD) has been created in Microsoft Access. It is divided into two parts: a data module that houses the raw data and an application module that automatically displays summaries of these data in a user-friendly fashion. By dividing the database into two parts, the user only has access to the specified data summaries, the raw data remain secure, and the LFD can be updated as further data become available. The LFD incorporates an ActiveX component (MapInfo MapX) that allows the user to visualise spatial data and animations of modelled larval dispersal, which are displayed using Microsoft's Media Player. The LFD has been designed to allow expansion.
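The raw-data/application split described above can be illustrated with a small stand-in example: a read-only summary view over a raw table, so that users query summaries while the underlying records stay untouched. The sketch below uses SQLite in place of Access, and the table, view, and column names are invented for illustration rather than taken from the actual LFD schema.

    import sqlite3

    # Stand-in for the LFD's two-part design: raw data in one place,
    # user-facing access only through an aggregated summary view.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE raw_samples (species TEXT, latitude REAL, "
                "longitude REAL, sample_date TEXT, larvae_count INTEGER)")
    con.execute("INSERT INTO raw_samples VALUES "
                "('Macruronus novaezelandiae', -42.5, 145.2, '1998-08-15', 12)")
    con.execute("CREATE VIEW species_summary AS "
                "SELECT species, COUNT(*) AS n_samples, "
                "SUM(larvae_count) AS total_larvae "
                "FROM raw_samples GROUP BY species")
    print(con.execute("SELECT * FROM species_summary").fetchall())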
The QuOTA project involved NOAA-IPRC and CMAR jointly building a very high quality ocean thermal data archive by applying methods and expertise developed through the NOAA-IPRC/CMAR IOTA (Indian Ocean Thermal Archive) collaboration, which was established in 1998. The QuOTA project resulted in a high quality upper ocean temperature dataset for the Indian Ocean and the south-western Pacific (east of the dateline). QuOTA contains ocean temperature data collected since 1778 and includes XBT, CT, CU, CTD, XCTD, MBT, BT, BA, DT, SST, TE, UO, bottle, and drifting and moored buoy data. Quality control of the data is carried out by automated processes, followed by 'hand-QC' of data that fail the automated tests. This results in a data set containing very little 'bad' data; any that remains is usually only subtly faulty and has little impact on most analyses.
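As an illustration of the two-stage screening described above, the sketch below applies simple automated checks and routes failing profiles to manual review; the thresholds and flag values are illustrative assumptions, not the actual QuOTA QC tests.

    import numpy as np

    # Hypothetical automated QC pass: gross-range and spike checks on a
    # temperature profile; anything flagged is then sent to 'hand-QC'.
    def automated_qc(temps, t_min=-2.5, t_max=36.0, max_spike=5.0):
        flags = np.zeros(len(temps), dtype=int)          # 0 = good
        flags[(temps < t_min) | (temps > t_max)] = 4     # gross-range failure
        spikes = np.abs(np.diff(temps, prepend=temps[0]))
        flags[(spikes > max_spike) & (flags == 0)] = 3   # suspect spike
        return flags

    temps = np.array([24.1, 23.8, 48.0, 22.9])           # one obviously bad value
    flags = automated_qc(temps)
    needs_hand_qc = bool(np.any(flags > 0))              # route profile to manual review
    print(flags, needs_hand_qc)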