# Layers
## `PositionalEncoding(d_model: int, dropout: float = 0.1, max_len: int = 5000)`
Bases: Module
Standard sinusoidal positional encoding.
`dropout = nn.Dropout(p=dropout)` *instance-attribute*
### `forward(x: Float[Tensor, 'token batch embedding']) -> Float[Tensor, 'token batch embedding']`
Positional encoding forward pass.
| PARAMETER | DESCRIPTION |
|---|---|
| `x` | Input tensor of shape `(token, batch, embedding)`. TYPE: `Float[Tensor, 'token batch embedding']` |
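The class body is not reproduced in this reference; the signature and shapes above match the standard Transformer construction (Vaswani et al., 2017), which can be sketched as follows. Everything beyond the documented signature is an assumption:

```python
import math

import torch
from torch import nn


class PositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding (sketch, not the actual source)."""

    def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = 5000):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)
        # Precompute the (max_len, 1, d_model) encoding table once.
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model)
        )
        pe = torch.zeros(max_len, 1, d_model)
        pe[:, 0, 0::2] = torch.sin(position * div_term)
        pe[:, 0, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (token, batch, embedding); add the encoding per position.
        x = x + self.pe[: x.size(0)]
        return self.dropout(x)


enc = PositionalEncoding(d_model=8, dropout=0.0)
out = enc(torch.zeros(10, 2, 8))
print(out.shape)  # torch.Size([10, 2, 8])
```

Note the `(token, batch, embedding)` layout: the sequence dimension comes first, so the precomputed table broadcasts over the batch dimension.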
## `MultiScalePeakEmbedding(h_size: int, dropout: float = 0, float_dtype: torch.dtype | str = torch.float64)`
Bases: Module
Multi-scale sinusoidal peak embedding based on Voronov et al.
`h_size = h_size` *instance-attribute*
`float_dtype = getattr(torch, float_dtype, None) if isinstance(float_dtype, str) else float_dtype` *instance-attribute*
`mlp = nn.Sequential(nn.Linear(h_size, h_size), nn.ReLU(), nn.Dropout(dropout), nn.Linear(h_size, h_size), nn.Dropout(dropout))` *instance-attribute*
`head = nn.Sequential(nn.Linear(h_size + 1, h_size), nn.ReLU(), nn.Dropout(dropout), nn.Linear(h_size, h_size), nn.Dropout(dropout))` *instance-attribute*
### `forward(spectra: Float[Spectrum, ' batch']) -> Float[SpectrumEmbedding, ' batch']`
Encode peaks.
### `encode_mass(x: Float[Tensor, ' batch']) -> Float[Tensor, 'batch embedding']`
Encode m/z values.
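The multi-scale encoding maps each m/z value to sinusoidal features at a geometric ladder of wavelengths, so that both small mass differences and large absolute masses are resolvable. A minimal sketch of such an `encode_mass`-style function follows; the wavelength range, `h_size`, and the use of `float64` (mirroring the class's `float_dtype` default) are illustrative assumptions, not the library's actual constants:

```python
import math

import torch


def encode_mass(
    x: torch.Tensor,
    h_size: int = 16,
    min_wavelength: float = 0.001,
    max_wavelength: float = 10000.0,
) -> torch.Tensor:
    """Multi-scale sinusoidal m/z encoding (sketch; wavelengths are assumed)."""
    n = h_size // 2
    # Geometric ladder of wavelengths from min_wavelength up to max_wavelength.
    wavelengths = min_wavelength * (max_wavelength / min_wavelength) ** (
        torch.arange(n, dtype=torch.float64) / (n - 1)
    )
    # float64 keeps high-precision masses distinguishable at short wavelengths.
    angles = x.to(torch.float64).unsqueeze(-1) * (2 * math.pi / wavelengths)
    # Concatenate sine and cosine features: (batch,) -> (batch, h_size).
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


feats = encode_mass(torch.tensor([147.11, 500.27]), h_size=16)
print(feats.shape)  # torch.Size([2, 16])
```

In the class above, features like these would then pass through the `mlp`/`head` stacks to produce the final peak embedding.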
## `ConvPeakEmbedding(h_size: int, dropout: float = 0)`
Bases: Module
Convolutional peak embedding.
`h_size = h_size` *instance-attribute*
`conv = nn.Sequential(nn.Conv1d(1, h_size // 4, kernel_size=40000, stride=100, padding=(40000 // 2 - 1)), nn.ReLU(), nn.Dropout(), nn.Conv1d(h_size // 4, h_size, kernel_size=5, stride=1, padding=1), nn.ReLU(), nn.Dropout())` *instance-attribute*
### `forward(x: Tensor) -> Tensor`
Conv peak embedding.
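The `conv` attribute implies the spectrum arrives as a dense 1-D intensity vector over binned m/z values: a wide first kernel (40000 taps, stride 100) downsamples the raw axis, and a small second kernel mixes local context. A shape-only sketch of that stack (the input length of 50,000 bins is an assumed example, and the `Dropout` layers are omitted so the output is deterministic):

```python
import torch
from torch import nn

h_size = 8
# The documented conv stack, minus Dropout, for a deterministic shape check.
conv = nn.Sequential(
    nn.Conv1d(1, h_size // 4, kernel_size=40000, stride=100, padding=40000 // 2 - 1),
    nn.ReLU(),
    nn.Conv1d(h_size // 4, h_size, kernel_size=5, stride=1, padding=1),
    nn.ReLU(),
)
# A spectrum rendered as a dense intensity vector: (batch, 1, m/z bins).
x = torch.zeros(2, 1, 50_000)
y = conv(x)
print(y.shape)  # torch.Size([2, 8, 498]): 50_000 bins -> 500 after stride 100, then 498
```

The stride-100 first layer is what makes the huge kernel affordable: it evaluates the 40000-tap filter only once per 100 input bins.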