1021 Oceanolab (Océanopolis) - Microcosm experiments using manipulated natural communities of phytoplankton (Bay of Brest)

Title
Oceanolab (Océanopolis) - Microcosm experiments using manipulated natural communities of phytoplankton (Bay of Brest)
Period
2025-04-09 to 2025-11-30
Period length
7 months 21 days
Sampling interval
8 months, irregular interval
Species Groups
Phytoplankton, Zooplankton
Study site
brest
Sampling types
Phytoplankton, Zooplankton, surface water
Sampling locations
Océanopolis - Océanolab
Location
48.39040108441057, -4.4352149963378915
Type
indoor mesocosm platform

Website:

https://www.oceanopolis.com/atelier-educatif-oceanolab/#

Contact
Philippe Pondaven
Licence for data
All rights reserved. Please send a request to Philippe Pondaven if you would like to use these data. Please also note our data policy: IGB Data Policy
Project
BLIC-Blooms Like It Colorful

Machine Readable Metadata Files

FRED provides all metadata of this package in a machine-readable format: a plain XML file and an EML file in Ecological Metadata Language. Both files are published under the free CC BY 4.0 licence. A short example of reading the EML file follows the file list below.

  • Oceanolab_(Océanopolis)_-_Microcosm_experiments_using_manipulated_natural_communities_of_phytoplankton_(Bay_of_Brest).xml
  • Oceanolab_(Océanopolis)_-_Microcosm_experiments_using_manipulated_natural_communities_of_phytoplankton_(Bay_of_Brest).eml
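
The following is a minimal sketch, assuming the EML file above has been downloaded from the package page and follows a standard EML 2.x layout; the element paths are assumptions and may differ slightly in FRED's export.

```python
# Minimal sketch: read the EML export of this package and print a few core
# fields. The file name is taken from the list above; the element paths
# assume a standard EML 2.x layout and may differ in FRED's export.
import xml.etree.ElementTree as ET

eml_file = (
    "Oceanolab_(Océanopolis)_-_Microcosm_experiments_using_manipulated_"
    "natural_communities_of_phytoplankton_(Bay_of_Brest).eml"
)

tree = ET.parse(eml_file)
root = tree.getroot()

# EML wraps the package description in a <dataset> element; matching by
# local name with the {*} wildcard keeps the sketch independent of the
# exact namespace URI used by the export.
dataset = root.find(".//{*}dataset")
if dataset is not None:
    title = dataset.findtext("{*}title", default="(no title)")
    print("Title:", title)

    # Temporal coverage, if present, holds the sampling period.
    begin = dataset.findtext(".//{*}beginDate/{*}calendarDate")
    end = dataset.findtext(".//{*}endDate/{*}calendarDate")
    print("Period:", begin, "to", end)
```

The plain XML export can be read the same way, though its element names will differ.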

Parsing a Data File

Why does it take so much time?

While parsing a file, the database has to perform several tasks, some of which need a lot of CPU and memory for larger files. A rough sketch of these stages follows the list below.

  • preprocessing: automatic detection of headlines, the table body, value formats and CSV separators
  • copying: the file is read cell by cell and all elements are copied into the database; format settings (for example ISO time stamps) can be derived during this step
  • analyzing: the data type of each column is determined (time, numeric or text)
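
As a rough illustration of these three stages (not FRED's actual implementation), the following Python sketch uses only the standard library to detect the CSV dialect, copy the rows, and guess a type for each column:

```python
# Rough illustration of the preprocessing / copying / analyzing stages
# described above, using only the Python standard library. This is not
# FRED's implementation; it only mirrors the idea of the three steps.
import csv
from datetime import datetime


def parse_data_file(path):
    with open(path, newline="", encoding="utf-8") as fh:
        sample = fh.read(4096)
        fh.seek(0)

        # preprocessing: detect the CSV separator and whether the first
        # row is a headline.
        dialect = csv.Sniffer().sniff(sample)
        has_header = csv.Sniffer().has_header(sample)
        reader = csv.reader(fh, dialect)
        header = next(reader) if has_header else None

        # copying: read the file row by row and keep every cell; in a
        # real import this is where values would be written to the
        # database.
        rows = [row for row in reader]

    # analyzing: decide for each column whether it looks like time,
    # numeric or text data.
    column_types = [_guess_type(col) for col in zip(*rows)]
    return header, rows, column_types


def _guess_type(values):
    def is_time(v):
        try:
            datetime.fromisoformat(v)
            return True
        except ValueError:
            return False

    def is_number(v):
        try:
            float(v)
            return True
        except ValueError:
            return False

    non_empty = [v for v in values if v]
    if non_empty and all(is_time(v) for v in non_empty):
        return "time"
    if non_empty and all(is_number(v) for v in non_empty):
        return "numeric"
    return "text"
```

For larger files, the copying and analyzing steps dominate, since every cell has to be read and every column inspected, which is why the import can take noticeably longer than the upload itself.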