HyLFM-Net-stat

cover image

HyLFM-Net trained on static images of arrested medaka hatchling hearts. The network reconstructs a volumetric image from a given light-field.

Table of Contents

Model Details

Model Description

Model Sources

Uses

Direct Use

This model is compatible with the bioimageio.spec Python package (version >= 0.5.7.1) and the bioimageio.core Python package supporting model inference in Python code or via the bioimageio CLI.

from bioimageio.core import predict

output_sample = predict(
    "huggingface/thefynnbe/ambitious-sloth/1.3",
    inputs={'lf': '<path or tensor>'},
)

output_tensor = output_sample.members["prediction"]
xarray_dataarray = output_tensor.data
numpy_ndarray = output_tensor.data.to_numpy()
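The resulting NumPy array can be handed to standard tooling. A minimal sketch, using a zero placeholder array with the model's output shape in place of the real `numpy_ndarray` from the snippet above:

```python
import numpy as np

# Placeholder standing in for `numpy_ndarray` above: a zero array with the
# model's output shape (batch, channel, z, y, x) = (1, 1, 49, 244, 284).
volume = np.zeros((1, 1, 49, 244, 284), dtype=np.float32)

# Drop the singleton batch and channel axes to obtain a plain z-y-x stack,
# then save it for downstream analysis.
zyx_stack = np.squeeze(volume)              # shape: (49, 244, 284)
np.save("prediction_volume.npy", zyx_stack)
```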

Downstream Use

Specific bioimage.io partner tool compatibilities may be reported at Compatibility Reports. Training (and fine-tuning) code may be available at https://github.com/kreshuklab/hylfm-net.

Out-of-Scope Use

No out-of-scope use information was provided; therefore these typical limitations should be considered:

  • Likely not suitable for diagnostic purposes.
  • Likely not validated for different imaging modalities than present in the training data.
  • Should not be used without proper validation on user's specific datasets.

Bias, Risks, and Limitations

In general, bioimage models may suffer from biases caused by:

  • Imaging protocol dependencies
  • Use of a specific cell type
  • Species-specific training data limitations

Common risks in bioimage analysis include:

  • Erroneously assuming generalization to unseen experimental conditions
  • Trusting (overconfident) model outputs without validation
  • Misinterpretation of results

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

How to Get Started with the Model

You can use "huggingface/thefynnbe/ambitious-sloth/1.3" as the resource identifier to load this model directly from the Hugging Face Hub using bioimageio.spec or bioimageio.core.

See bioimageio.core documentation: Get started for instructions on how to load and run this model using the bioimageio.core Python package or the bioimageio CLI.

Training Details

Training Data

This model was trained on the dataset published at DOI 10.5281/zenodo.7612115.

Training Procedure

Training Hyperparameters

  • Framework: Pytorch State Dict

Speeds, Sizes, Times

  • Model size: 234.44 MB

Environmental Impact

  • Hardware Type: GTX 2080 Ti
  • Hours used: 10.0
  • Cloud Provider: EMBL Heidelberg
  • Compute Region: Germany
  • Carbon Emitted: 0.54 kg CO2e
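The reported carbon figure is consistent with a simple energy-times-grid-intensity estimate. The 250 W power draw and 0.216 kg CO2e/kWh grid intensity below are assumptions for illustration, not values from the card:

```python
# Hypothetical back-of-the-envelope check of the reported 0.54 kg CO2e.
# Assumed values (not from the card): 250 W GPU power draw and a grid
# carbon intensity of 0.216 kg CO2e per kWh.
power_kw = 250 / 1000          # assumed GPU draw in kW
hours = 10.0                   # "Hours used" from the card
intensity = 0.216              # assumed kg CO2e per kWh

emissions = power_kw * hours * intensity  # kg CO2e
print(round(emissions, 2))
```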

Technical Specifications

Model Architecture and Objective

  • Architecture: HyLFM-Net, a convolutional neural network for light-field microscopy volume reconstruction.

  • Input specifications: lf:

    • Axes: batch, channel, y, x
    • Shape: 1 × 1 × 1235 × 1425
    • Data type: float32
    • Value unit: arbitrary unit
    • Value scale factor: 1.0
    • example lf sample
  • Output specifications: prediction: predicted volume of fluorescence signal

    • Axes: batch, channel, z, y, x
    • Shape: 1 × 1 × 49 × 244 × 284
    • Data type: float32
    • Value unit: arbitrary unit
    • Value scale factor: 1.0
    • example prediction sample
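As an illustrative sketch (not part of the specification itself), a dummy input conforming to the `lf` tensor specification can be constructed and checked with NumPy before inference:

```python
import numpy as np

# Build a dummy light-field input matching the declared 'lf' specification:
# axes (batch, channel, y, x), shape 1 x 1 x 1235 x 1425, dtype float32.
lf = np.random.rand(1, 1, 1235, 1425).astype(np.float32)

# Verify shape and dtype against the spec before handing it to the model.
assert lf.shape == (1, 1, 1235, 1425)
assert lf.dtype == np.float32
```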

Compute Infrastructure

Hardware Requirements

  • Storage: Model size: 234.44 MB

Software

  • Framework:
    • ONNX: opset version: 15
    • Pytorch State Dict: 1.13
    • TorchScript: 1.13
  • Libraries: None beyond the respective framework library.
  • BioImage.IO partner compatibility: Compatibility Reports

This model card was created using the template of the bioimageio.spec Python package, which in turn is based on the BioImage Model Zoo template, incorporating best practices from the Hugging Face Model Card Template. For more information on contributing models, visit bioimage.io.

