models

This module defines models and solvers for 4D-VarNet.

4D-VarNet is a framework for solving inverse problems in data assimilation using deep learning and PyTorch Lightning.

Classes:

    Lit4dVarNet: A PyTorch Lightning module for training and testing 4D-VarNet models.
    GradSolver: A gradient-based solver for optimization in 4D-VarNet.
    ConvLstmGradModel: A convolutional LSTM model for gradient modulation.
    BaseObsCost: A base class for observation cost computation.
    BilinAEPriorCost: A prior cost model using bilinear autoencoders.
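
These pieces compose into a full 4D-VarNet: the two cost modules define the variational cost, ConvLstmGradModel turns its gradient into a learned update, GradSolver iterates that update, and Lit4dVarNet wraps the solver for training. A minimal end-to-end sketch follows; the TrainingItem namedtuple, tensor sizes, and hyperparameters are illustrative assumptions, not part of this module.

# Hedged sketch: wiring the model classes together on dummy data.
# TrainingItem and all shapes/hyperparameters below are assumptions;
# the real project supplies its own batch structure and configuration.
from collections import namedtuple

import torch

from ocean4dvarnet.models import (
    BaseObsCost, BilinAEPriorCost, ConvLstmGradModel, GradSolver,
)

TrainingItem = namedtuple("TrainingItem", ["input", "tgt"])

dim_in = 15  # e.g. 15 time steps stacked along the channel axis
solver = GradSolver(
    prior_cost=BilinAEPriorCost(dim_in=dim_in, dim_hidden=64),
    obs_cost=BaseObsCost(w=1.0),
    grad_mod=ConvLstmGradModel(dim_in=dim_in, dim_hidden=96),
    n_step=5,
)

obs = torch.randn(2, dim_in, 64, 64)  # partial observations with gaps
obs[obs > 1.0] = float("nan")         # NaNs mark missing values
batch = TrainingItem(input=obs, tgt=torch.randn(2, dim_in, 64, 64))
state = solver(batch)                 # reconstructed state
print(state.shape)                    # torch.Size([2, 15, 64, 64])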

BaseObsCost

Bases: Module

A base class for computing observation cost.

Attributes:

    w (float): Weight for the observation cost.

Source code in ocean4dvarnet/models.py
class BaseObsCost(nn.Module):
    """
    A base class for computing observation cost.

    Attributes:
        w (float): Weight for the observation cost.
    """

    def __init__(self, w=1) -> None:
        """
        Initialize the BaseObsCost module.

        Args:
            w (float, optional): Weight for the observation cost. Defaults to 1.
        """
        super().__init__()
        self.w = w

    def forward(self, state, batch):
        """
        Compute the observation cost.

        Args:
            state (torch.Tensor): The current state tensor.
            batch (dict): The input batch containing data.

        Returns:
            torch.Tensor: The computed observation cost.
        """
        msk = batch.input.isfinite()
        return self.w * F.mse_loss(state[msk], batch.input.nan_to_num()[msk])

__init__(w=1)

Initialize the BaseObsCost module.

Parameters:

    w (float, optional): Weight for the observation cost. Defaults to 1.

Source code in ocean4dvarnet/models.py
def __init__(self, w=1) -> None:
    """
    Initialize the BaseObsCost module.

    Args:
        w (float, optional): Weight for the observation cost. Defaults to 1.
    """
    super().__init__()
    self.w = w

forward(state, batch)

Compute the observation cost.

Parameters:

    state (torch.Tensor): The current state tensor.
    batch (dict): The input batch containing data.

Returns:

    torch.Tensor: The computed observation cost.

Source code in ocean4dvarnet/models.py
def forward(self, state, batch):
    """
    Compute the observation cost.

    Args:
        state (torch.Tensor): The current state tensor.
        batch (dict): The input batch containing data.

    Returns:
        torch.Tensor: The computed observation cost.
    """
    msk = batch.input.isfinite()
    return self.w * F.mse_loss(state[msk], batch.input.nan_to_num()[msk])
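
A short usage sketch (the Batch namedtuple here is a stand-in assumption for the real batch object, which forward only accesses through its input attribute):

# Hedged sketch: the cost is a masked MSE that ignores NaN observations.
from collections import namedtuple

import torch

from ocean4dvarnet.models import BaseObsCost

Batch = namedtuple("Batch", ["input"])
obs = torch.tensor([[1.0, float("nan")], [2.0, 3.0]])
state = torch.zeros_like(obs)

cost = BaseObsCost(w=1.0)
print(cost(state, Batch(input=obs)))  # (1 + 4 + 9) / 3 over the 3 finite entries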

BilinAEPriorCost

Bases: Module

A prior cost model using bilinear autoencoders.

Attributes:

    bilin_quad (bool): Whether to use bilinear quadratic terms.
    conv_in (nn.Conv2d): Convolutional layer for input.
    conv_hidden (nn.Conv2d): Convolutional layer for hidden states.
    bilin_1 (nn.Conv2d): Bilinear layer 1.
    bilin_21 (nn.Conv2d): Bilinear layer 2 (part 1).
    bilin_22 (nn.Conv2d): Bilinear layer 2 (part 2).
    conv_out (nn.Conv2d): Convolutional layer for output.
    down (nn.Module): Downsampling layer.
    up (nn.Module): Upsampling layer.

Source code in ocean4dvarnet/models.py
class BilinAEPriorCost(nn.Module):
    """
    A prior cost model using bilinear autoencoders.

    Attributes:
        bilin_quad (bool): Whether to use bilinear quadratic terms.
        conv_in (nn.Conv2d): Convolutional layer for input.
        conv_hidden (nn.Conv2d): Convolutional layer for hidden states.
        bilin_1 (nn.Conv2d): Bilinear layer 1.
        bilin_21 (nn.Conv2d): Bilinear layer 2 (part 1).
        bilin_22 (nn.Conv2d): Bilinear layer 2 (part 2).
        conv_out (nn.Conv2d): Convolutional layer for output.
        down (nn.Module): Downsampling layer.
        up (nn.Module): Upsampling layer.
    """

    def __init__(self, dim_in, dim_hidden, kernel_size=3, downsamp=None, bilin_quad=True):
        """
        Initialize the BilinAEPriorCost module.

        Args:
            dim_in (int): Number of input dimensions.
            dim_hidden (int): Number of hidden dimensions.
            kernel_size (int, optional): Kernel size for convolutions. Defaults to 3.
            downsamp (int, optional): Downsampling factor. Defaults to None.
            bilin_quad (bool, optional): Whether to use bilinear quadratic terms. Defaults to True.
        """
        super().__init__()
        self.bilin_quad = bilin_quad
        self.conv_in = nn.Conv2d(
            dim_in, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
        )
        self.conv_hidden = nn.Conv2d(
            dim_hidden, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
        )

        self.bilin_1 = nn.Conv2d(
            dim_hidden, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
        )
        self.bilin_21 = nn.Conv2d(
            dim_hidden, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
        )
        self.bilin_22 = nn.Conv2d(
            dim_hidden, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
        )

        self.conv_out = nn.Conv2d(
            2 * dim_hidden, dim_in, kernel_size=kernel_size, padding=kernel_size // 2
        )

        self.down = nn.AvgPool2d(downsamp) if downsamp is not None else nn.Identity()
        self.up = (
            nn.UpsamplingBilinear2d(scale_factor=downsamp)
            if downsamp is not None
            else nn.Identity()
        )

    def forward_ae(self, x):
        """
        Perform the forward pass through the autoencoder.

        Args:
            x (torch.Tensor): Input tensor.

        Returns:
            torch.Tensor: Output tensor after passing through the autoencoder.
        """
        x = self.down(x)
        x = self.conv_in(x)
        x = self.conv_hidden(F.relu(x))

        nonlin = (
            self.bilin_21(x)**2
            if self.bilin_quad
            else (self.bilin_21(x) * self.bilin_22(x))
        )
        x = self.conv_out(
            torch.cat([self.bilin_1(x), nonlin], dim=1)
        )
        x = self.up(x)
        return x

    def forward(self, state):
        """
        Compute the prior cost using the autoencoder.

        Args:
            state (torch.Tensor): The current state tensor.

        Returns:
            torch.Tensor: The computed prior cost.
        """
        return F.mse_loss(state, self.forward_ae(state))
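
A minimal usage sketch (shapes are illustrative assumptions): the prior cost is the reconstruction error of the state under the bilinear autoencoder, so states resembling the training distribution receive a low cost.

# Hedged sketch: prior cost as autoencoder reconstruction error.
import torch

from ocean4dvarnet.models import BilinAEPriorCost

prior = BilinAEPriorCost(dim_in=15, dim_hidden=64, downsamp=2)
state = torch.randn(2, 15, 64, 64)    # H and W must be divisible by downsamp
print(prior(state))                   # scalar MSE between state and its reconstruction
print(prior.forward_ae(state).shape)  # torch.Size([2, 15, 64, 64])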

__init__(dim_in, dim_hidden, kernel_size=3, downsamp=None, bilin_quad=True)

Initialize the BilinAEPriorCost module.

Parameters:

    dim_in (int): Number of input dimensions.
    dim_hidden (int): Number of hidden dimensions.
    kernel_size (int, optional): Kernel size for convolutions. Defaults to 3.
    downsamp (int, optional): Downsampling factor. Defaults to None.
    bilin_quad (bool, optional): Whether to use bilinear quadratic terms. Defaults to True.

Source code in ocean4dvarnet/models.py
def __init__(self, dim_in, dim_hidden, kernel_size=3, downsamp=None, bilin_quad=True):
    """
    Initialize the BilinAEPriorCost module.

    Args:
        dim_in (int): Number of input dimensions.
        dim_hidden (int): Number of hidden dimensions.
        kernel_size (int, optional): Kernel size for convolutions. Defaults to 3.
        downsamp (int, optional): Downsampling factor. Defaults to None.
        bilin_quad (bool, optional): Whether to use bilinear quadratic terms. Defaults to True.
    """
    super().__init__()
    self.bilin_quad = bilin_quad
    self.conv_in = nn.Conv2d(
        dim_in, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
    )
    self.conv_hidden = nn.Conv2d(
        dim_hidden, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
    )

    self.bilin_1 = nn.Conv2d(
        dim_hidden, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
    )
    self.bilin_21 = nn.Conv2d(
        dim_hidden, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
    )
    self.bilin_22 = nn.Conv2d(
        dim_hidden, dim_hidden, kernel_size=kernel_size, padding=kernel_size // 2
    )

    self.conv_out = nn.Conv2d(
        2 * dim_hidden, dim_in, kernel_size=kernel_size, padding=kernel_size // 2
    )

    self.down = nn.AvgPool2d(downsamp) if downsamp is not None else nn.Identity()
    self.up = (
        nn.UpsamplingBilinear2d(scale_factor=downsamp)
        if downsamp is not None
        else nn.Identity()
    )

forward(state)

Compute the prior cost using the autoencoder.

Parameters:

    state (torch.Tensor): The current state tensor.

Returns:

    torch.Tensor: The computed prior cost.

Source code in ocean4dvarnet/models.py
def forward(self, state):
    """
    Compute the prior cost using the autoencoder.

    Args:
        state (torch.Tensor): The current state tensor.

    Returns:
        torch.Tensor: The computed prior cost.
    """
    return F.mse_loss(state, self.forward_ae(state))

forward_ae(x)

Perform the forward pass through the autoencoder.

Parameters:

    x (torch.Tensor): Input tensor.

Returns:

    torch.Tensor: Output tensor after passing through the autoencoder.

Source code in ocean4dvarnet/models.py
def forward_ae(self, x):
    """
    Perform the forward pass through the autoencoder.

    Args:
        x (torch.Tensor): Input tensor.

    Returns:
        torch.Tensor: Output tensor after passing through the autoencoder.
    """
    x = self.down(x)
    x = self.conv_in(x)
    x = self.conv_hidden(F.relu(x))

    nonlin = (
        self.bilin_21(x)**2
        if self.bilin_quad
        else (self.bilin_21(x) * self.bilin_22(x))
    )
    x = self.conv_out(
        torch.cat([self.bilin_1(x), nonlin], dim=1)
    )
    x = self.up(x)
    return x
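
In equations, writing h for the hidden features produced by conv_hidden(ReLU(conv_in(down(x)))), the decoder concatenates a linear branch and a bilinear branch before projecting back:

\[
\hat{x} = \mathrm{up}\!\left(\mathrm{conv\_out}\big(\left[\,\mathrm{bilin}_1(h),\;\mathrm{bilin}_{21}(h)\odot\mathrm{bilin}_{22}(h)\,\right]\big)\right)
\]

where \odot is the elementwise product; with bilin_quad=True the second branch reduces to \mathrm{bilin}_{21}(h)^2, as in the code above.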

ConvLstmGradModel

Bases: Module

A convolutional LSTM model for gradient modulation.

Attributes:

    dim_hidden (int): Number of hidden dimensions.
    gates (nn.Conv2d): Convolutional gates for LSTM.
    conv_out (nn.Conv2d): Output convolutional layer.
    dropout (nn.Dropout): Dropout layer.
    down (nn.Module): Downsampling layer.
    up (nn.Module): Upsampling layer.

Source code in ocean4dvarnet/models.py
class ConvLstmGradModel(nn.Module):
    """
    A convolutional LSTM model for gradient modulation.

    Attributes:
        dim_hidden (int): Number of hidden dimensions.
        gates (nn.Conv2d): Convolutional gates for LSTM.
        conv_out (nn.Conv2d): Output convolutional layer.
        dropout (nn.Dropout): Dropout layer.
        down (nn.Module): Downsampling layer.
        up (nn.Module): Upsampling layer.
    """

    def __init__(self, dim_in, dim_hidden, kernel_size=3, dropout=0.1, downsamp=None):
        """
        Initialize the ConvLstmGradModel.

        Args:
            dim_in (int): Number of input dimensions.
            dim_hidden (int): Number of hidden dimensions.
            kernel_size (int, optional): Kernel size for convolutions. Defaults to 3.
            dropout (float, optional): Dropout rate. Defaults to 0.1.
            downsamp (int, optional): Downsampling factor. Defaults to None.
        """
        super().__init__()
        self.dim_hidden = dim_hidden
        self.gates = torch.nn.Conv2d(
            dim_in + dim_hidden,
            4 * dim_hidden,
            kernel_size=kernel_size,
            padding=kernel_size // 2,
        )

        self.conv_out = torch.nn.Conv2d(
            dim_hidden, dim_in, kernel_size=kernel_size, padding=kernel_size // 2
        )

        self.dropout = torch.nn.Dropout(dropout)
        self._state = []
        self.down = nn.AvgPool2d(downsamp) if downsamp is not None else nn.Identity()
        self.up = (
            nn.UpsamplingBilinear2d(scale_factor=downsamp)
            if downsamp is not None
            else nn.Identity()
        )

    def reset_state(self, inp):
        """
        Reset the internal state of the LSTM.

        Args:
            inp (torch.Tensor): Input tensor to determine state size.
        """
        size = [inp.shape[0], self.dim_hidden, *inp.shape[-2:]]
        self._grad_norm = None
        self._state = [
            self.down(torch.zeros(size, device=inp.device)),
            self.down(torch.zeros(size, device=inp.device)),
        ]

    def forward(self, x):
        """
        Perform the forward pass of the LSTM.

        Args:
            x (torch.Tensor): Input tensor.

        Returns:
            torch.Tensor: Output tensor.
        """
        if self._grad_norm is None:
            self._grad_norm = (x**2).mean().sqrt()
        x = x / self._grad_norm
        hidden, cell = self._state
        x = self.dropout(x)
        x = self.down(x)
        gates = self.gates(torch.cat((x, hidden), 1))

        in_gate, remember_gate, out_gate, cell_gate = gates.chunk(4, 1)

        in_gate, remember_gate, out_gate = map(
            torch.sigmoid, [in_gate, remember_gate, out_gate]
        )
        cell_gate = torch.tanh(cell_gate)

        cell = (remember_gate * cell) + (in_gate * cell_gate)
        hidden = out_gate * torch.tanh(cell)

        self._state = hidden, cell
        out = self.conv_out(hidden)
        out = self.up(out)
        return out
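
A usage sketch (shapes assumed for illustration): reset_state must be called once per assimilation window before the first forward call, since the hidden/cell states and the gradient normalization constant persist across calls.

# Hedged sketch: stateful ConvLSTM over successive gradient fields.
import torch

from ocean4dvarnet.models import ConvLstmGradModel

gm = ConvLstmGradModel(dim_in=15, dim_hidden=96)
grad = torch.randn(2, 15, 64, 64)

gm.reset_state(grad)               # zero hidden/cell states, clear the stored norm
out1 = gm(grad)                    # first call fixes the normalization constant
out2 = gm(torch.randn_like(grad))  # reuses the hidden state from the previous call
print(out1.shape, out2.shape)      # both torch.Size([2, 15, 64, 64])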

__init__(dim_in, dim_hidden, kernel_size=3, dropout=0.1, downsamp=None)

Initialize the ConvLstmGradModel.

Parameters:

    dim_in (int): Number of input dimensions.
    dim_hidden (int): Number of hidden dimensions.
    kernel_size (int, optional): Kernel size for convolutions. Defaults to 3.
    dropout (float, optional): Dropout rate. Defaults to 0.1.
    downsamp (int, optional): Downsampling factor. Defaults to None.

Source code in ocean4dvarnet/models.py
def __init__(self, dim_in, dim_hidden, kernel_size=3, dropout=0.1, downsamp=None):
    """
    Initialize the ConvLstmGradModel.

    Args:
        dim_in (int): Number of input dimensions.
        dim_hidden (int): Number of hidden dimensions.
        kernel_size (int, optional): Kernel size for convolutions. Defaults to 3.
        dropout (float, optional): Dropout rate. Defaults to 0.1.
        downsamp (int, optional): Downsampling factor. Defaults to None.
    """
    super().__init__()
    self.dim_hidden = dim_hidden
    self.gates = torch.nn.Conv2d(
        dim_in + dim_hidden,
        4 * dim_hidden,
        kernel_size=kernel_size,
        padding=kernel_size // 2,
    )

    self.conv_out = torch.nn.Conv2d(
        dim_hidden, dim_in, kernel_size=kernel_size, padding=kernel_size // 2
    )

    self.dropout = torch.nn.Dropout(dropout)
    self._state = []
    self.down = nn.AvgPool2d(downsamp) if downsamp is not None else nn.Identity()
    self.up = (
        nn.UpsamplingBilinear2d(scale_factor=downsamp)
        if downsamp is not None
        else nn.Identity()
    )

forward(x)

Perform the forward pass of the LSTM.

Parameters:

    x (torch.Tensor): Input tensor.

Returns:

    torch.Tensor: Output tensor.

Source code in ocean4dvarnet/models.py
def forward(self, x):
    """
    Perform the forward pass of the LSTM.

    Args:
        x (torch.Tensor): Input tensor.

    Returns:
        torch.Tensor: Output tensor.
    """
    if self._grad_norm is None:
        self._grad_norm = (x**2).mean().sqrt()
    x = x / self._grad_norm
    hidden, cell = self._state
    x = self.dropout(x)
    x = self.down(x)
    gates = self.gates(torch.cat((x, hidden), 1))

    in_gate, remember_gate, out_gate, cell_gate = gates.chunk(4, 1)

    in_gate, remember_gate, out_gate = map(
        torch.sigmoid, [in_gate, remember_gate, out_gate]
    )
    cell_gate = torch.tanh(cell_gate)

    cell = (remember_gate * cell) + (in_gate * cell_gate)
    hidden = out_gate * torch.tanh(cell)

    self._state = hidden, cell
    out = self.conv_out(hidden)
    out = self.up(out)
    return out

reset_state(inp)

Reset the internal state of the LSTM.

Parameters:

    inp (torch.Tensor): Input tensor to determine state size.

Source code in ocean4dvarnet/models.py
def reset_state(self, inp):
    """
    Reset the internal state of the LSTM.

    Args:
        inp (torch.Tensor): Input tensor to determine state size.
    """
    size = [inp.shape[0], self.dim_hidden, *inp.shape[-2:]]
    self._grad_norm = None
    self._state = [
        self.down(torch.zeros(size, device=inp.device)),
        self.down(torch.zeros(size, device=inp.device)),
    ]

GradSolver

Bases: Module

A gradient-based solver for optimization in 4D-VarNet.

Attributes:

    prior_cost (nn.Module): The prior cost function.
    obs_cost (nn.Module): The observation cost function.
    grad_mod (nn.Module): The gradient modulation model.
    n_step (int): Number of optimization steps.
    lr_grad (float): Learning rate for gradient updates.
    lbd (float): Regularization parameter.

Source code in ocean4dvarnet/models.py
class GradSolver(nn.Module):
    """
    A gradient-based solver for optimization in 4D-VarNet.

    Attributes:
        prior_cost (nn.Module): The prior cost function.
        obs_cost (nn.Module): The observation cost function.
        grad_mod (nn.Module): The gradient modulation model.
        n_step (int): Number of optimization steps.
        lr_grad (float): Learning rate for gradient updates.
        lbd (float): Regularization parameter.
    """

    def __init__(self, prior_cost, obs_cost, grad_mod, n_step, lr_grad=0.2, lbd=1.0, **kwargs):
        """
        Initialize the GradSolver.

        Args:
            prior_cost (nn.Module): The prior cost function.
            obs_cost (nn.Module): The observation cost function.
            grad_mod (nn.Module): The gradient modulation model.
            n_step (int): Number of optimization steps.
            lr_grad (float, optional): Learning rate for gradient updates. Defaults to 0.2.
            lbd (float, optional): Regularization parameter. Defaults to 1.0.
        """
        super().__init__()
        self.prior_cost = prior_cost
        self.obs_cost = obs_cost
        self.grad_mod = grad_mod

        self.n_step = n_step
        self.lr_grad = lr_grad
        self.lbd = lbd

        self._grad_norm = None

    def init_state(self, batch, x_init=None):
        """
        Initialize the state for optimization.

        Args:
            batch (dict): Input batch containing data.
            x_init (torch.Tensor, optional): Initial state. Defaults to None.

        Returns:
            torch.Tensor: Initialized state.
        """
        if x_init is not None:
            return x_init

        return batch.input.nan_to_num().detach().requires_grad_(True)

    def solver_step(self, state, batch, step):
        """
        Perform a single optimization step.

        Args:
            state (torch.Tensor): Current state.
            batch (dict): Input batch containing data.
            step (int): Current optimization step.

        Returns:
            torch.Tensor: Updated state.
        """
        var_cost = self.prior_cost(state) + self.lbd**2 * self.obs_cost(state, batch)
        grad = torch.autograd.grad(var_cost, state, create_graph=True)[0]

        gmod = self.grad_mod(grad)
        state_update = (
            1 / (step + 1) * gmod
            + self.lr_grad * (step + 1) / self.n_step * grad
        )

        return state - state_update

    def forward(self, batch):
        """
        Perform the forward pass of the solver.

        Args:
            batch (dict): Input batch containing data.

        Returns:
            torch.Tensor: Final optimized state.
        """
        with torch.set_grad_enabled(True):
            state = self.init_state(batch)
            self.grad_mod.reset_state(batch.input)

            for step in range(self.n_step):
                state = self.solver_step(state, batch, step=step)
                if not self.training:
                    state = state.detach().requires_grad_(True)

            if not self.training:
                state = self.prior_cost.forward_ae(state)
        return state
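
Note the train/eval asymmetry in forward: in training mode the computation graph is kept across all steps so the solver can be trained end to end, while in eval mode the state is detached after every step and the final state is passed once more through the prior's autoencoder. A self-contained sketch (batch structure and shapes are assumptions):

# Hedged sketch: GradSolver behaves differently in train and eval mode.
from collections import namedtuple

import torch

from ocean4dvarnet.models import (
    BaseObsCost, BilinAEPriorCost, ConvLstmGradModel, GradSolver,
)

Batch = namedtuple("Batch", ["input"])
solver = GradSolver(
    prior_cost=BilinAEPriorCost(dim_in=15, dim_hidden=32),
    obs_cost=BaseObsCost(),
    grad_mod=ConvLstmGradModel(dim_in=15, dim_hidden=48),
    n_step=3,
)
batch = Batch(input=torch.randn(1, 15, 32, 32))

solver.train()
x_train = solver(batch)  # graph kept across all 3 steps

solver.eval()
x_eval = solver(batch)   # per-step detach, then a final prior_cost.forward_ae pass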

__init__(prior_cost, obs_cost, grad_mod, n_step, lr_grad=0.2, lbd=1.0, **kwargs)

Initialize the GradSolver.

Parameters:

    prior_cost (nn.Module): The prior cost function.
    obs_cost (nn.Module): The observation cost function.
    grad_mod (nn.Module): The gradient modulation model.
    n_step (int): Number of optimization steps.
    lr_grad (float, optional): Learning rate for gradient updates. Defaults to 0.2.
    lbd (float, optional): Regularization parameter. Defaults to 1.0.

Source code in ocean4dvarnet/models.py
def __init__(self, prior_cost, obs_cost, grad_mod, n_step, lr_grad=0.2, lbd=1.0, **kwargs):
    """
    Initialize the GradSolver.

    Args:
        prior_cost (nn.Module): The prior cost function.
        obs_cost (nn.Module): The observation cost function.
        grad_mod (nn.Module): The gradient modulation model.
        n_step (int): Number of optimization steps.
        lr_grad (float, optional): Learning rate for gradient updates. Defaults to 0.2.
        lbd (float, optional): Regularization parameter. Defaults to 1.0.
    """
    super().__init__()
    self.prior_cost = prior_cost
    self.obs_cost = obs_cost
    self.grad_mod = grad_mod

    self.n_step = n_step
    self.lr_grad = lr_grad
    self.lbd = lbd

    self._grad_norm = None

forward(batch)

Perform the forward pass of the solver.

Parameters:

    batch (dict): Input batch containing data.

Returns:

    torch.Tensor: Final optimized state.

Source code in ocean4dvarnet/models.py
def forward(self, batch):
    """
    Perform the forward pass of the solver.

    Args:
        batch (dict): Input batch containing data.

    Returns:
        torch.Tensor: Final optimized state.
    """
    with torch.set_grad_enabled(True):
        state = self.init_state(batch)
        self.grad_mod.reset_state(batch.input)

        for step in range(self.n_step):
            state = self.solver_step(state, batch, step=step)
            if not self.training:
                state = state.detach().requires_grad_(True)

        if not self.training:
            state = self.prior_cost.forward_ae(state)
    return state

init_state(batch, x_init=None)

Initialize the state for optimization.

Parameters:

    batch (dict): Input batch containing data.
    x_init (torch.Tensor, optional): Initial state. Defaults to None.

Returns:

    torch.Tensor: Initialized state.

Source code in ocean4dvarnet/models.py
def init_state(self, batch, x_init=None):
    """
    Initialize the state for optimization.

    Args:
        batch (dict): Input batch containing data.
        x_init (torch.Tensor, optional): Initial state. Defaults to None.

    Returns:
        torch.Tensor: Initialized state.
    """
    if x_init is not None:
        return x_init

    return batch.input.nan_to_num().detach().requires_grad_(True)

solver_step(state, batch, step)

Perform a single optimization step.

Parameters:

    state (torch.Tensor): Current state.
    batch (dict): Input batch containing data.
    step (int): Current optimization step.

Returns:

    torch.Tensor: Updated state.

Source code in ocean4dvarnet/models.py
def solver_step(self, state, batch, step):
    """
    Perform a single optimization step.

    Args:
        state (torch.Tensor): Current state.
        batch (dict): Input batch containing data.
        step (int): Current optimization step.

    Returns:
        torch.Tensor: Updated state.
    """
    var_cost = self.prior_cost(state) + self.lbd**2 * self.obs_cost(state, batch)
    grad = torch.autograd.grad(var_cost, state, create_graph=True)[0]

    gmod = self.grad_mod(grad)
    state_update = (
        1 / (step + 1) * gmod
        + self.lr_grad * (step + 1) / self.n_step * grad
    )

    return state - state_update
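
In symbols, with variational cost J(x) = \mathrm{prior\_cost}(x) + \mathrm{lbd}^2\,\mathrm{obs\_cost}(x, \mathrm{batch}), the update implemented above reads:

\[
x_{k+1} = x_k - \left[\frac{1}{k+1}\,G_\phi\!\big(\nabla_x J(x_k)\big) + \mathrm{lr\_grad}\,\frac{k+1}{n\_step}\,\nabla_x J(x_k)\right]
\]

where G_\phi is grad_mod: the learned correction dominates early iterations, while the explicit gradient step ramps up towards the end.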

Lit4dVarNet

Bases: LightningModule

A PyTorch Lightning module for training and testing 4D-VarNet models.

Attributes:

    solver (GradSolver): The solver used for optimization.
    rec_weight (torch.Tensor): Reconstruction weight for loss computation.
    opt_fn (callable): Function to configure the optimizer.
    test_metrics (dict): Dictionary of test metrics.
    pre_metric_fn (callable): Preprocessing function for metrics.
    norm_stats (tuple): Normalization statistics (mean, std).
    persist_rw (bool): Whether to persist reconstruction weight as a buffer.

Source code in ocean4dvarnet/models.py
class Lit4dVarNet(pl.LightningModule):
    """
    A PyTorch Lightning module for training and testing 4D-VarNet models.

    Attributes:
        solver (GradSolver): The solver used for optimization.
        rec_weight (torch.Tensor): Reconstruction weight for loss computation.
        opt_fn (callable): Function to configure the optimizer.
        test_metrics (dict): Dictionary of test metrics.
        pre_metric_fn (callable): Preprocessing function for metrics.
        norm_stats (tuple): Normalization statistics (mean, std).
        persist_rw (bool): Whether to persist reconstruction weight as a buffer.
    """

    def __init__(
        self, solver, rec_weight, opt_fn, test_metrics=None,
        pre_metric_fn=None, norm_stats=None, persist_rw=True
    ):
        """
        Initialize the Lit4dVarNet module.

        Args:
            solver (GradSolver): The solver used for optimization.
            rec_weight (numpy.ndarray): Reconstruction weight for loss computation.
            opt_fn (callable): Function to configure the optimizer.
            test_metrics (dict, optional): Dictionary of test metrics.
            pre_metric_fn (callable, optional): Preprocessing function for metrics.
            norm_stats (tuple, optional): Normalization statistics (mean, std).
            persist_rw (bool, optional): Whether to persist reconstruction weight as a buffer.
        """
        super().__init__()
        self.solver = solver
        self.register_buffer('rec_weight', torch.from_numpy(rec_weight), persistent=persist_rw)
        self.test_data = None
        self._norm_stats = norm_stats
        self.opt_fn = opt_fn
        self.metrics = test_metrics or {}
        self.pre_metric_fn = pre_metric_fn or (lambda x: x)

    @property
    def norm_stats(self):
        """
        Retrieve normalization statistics (mean, std).

        Returns:
            tuple: Normalization statistics (mean, std).
        """
        if self._norm_stats is not None:
            return self._norm_stats
        elif self.trainer.datamodule is not None:
            return self.trainer.datamodule.norm_stats()
        return (0., 1.)

    @staticmethod
    def weighted_mse(err, weight):
        """
        Compute the weighted mean squared error.

        Args:
            err (torch.Tensor): Error tensor.
            weight (torch.Tensor): Weight tensor.

        Returns:
            torch.Tensor: Weighted MSE loss.
        """
        err_w = err * weight[None, ...]
        non_zeros = (torch.ones_like(err) * weight[None, ...]) == 0.0
        err_num = err.isfinite() & ~non_zeros
        if err_num.sum() == 0:
            return torch.scalar_tensor(1000.0, device=err_num.device).requires_grad_()
        loss = F.mse_loss(err_w[err_num], torch.zeros_like(err_w[err_num]))
        return loss

    def training_step(self, batch, batch_idx):
        """
        Perform a single training step.

        Args:
            batch (dict): Input batch.
            batch_idx (int): Batch index.

        Returns:
            torch.Tensor: Training loss.
        """
        return self.step(batch, "train")[0]

    def validation_step(self, batch, batch_idx):
        """
        Perform a single validation step.

        Args:
            batch (dict): Input batch.
            batch_idx (int): Batch index.

        Returns:
            torch.Tensor: Validation loss.
        """
        return self.step(batch, "val")[0]

    def forward(self, batch):
        """
        Forward pass through the solver.

        Args:
            batch (dict): Input batch.

        Returns:
            torch.Tensor: Solver output.
        """
        return self.solver(batch)

    def step(self, batch, phase=""):
        """
        Perform a single step for training or validation.

        Args:
            batch (dict): Input batch.
            phase (str, optional): Phase ("train" or "val").

        Returns:
            tuple: Loss and output tensor.
        """
        if self.training and batch.tgt.isfinite().float().mean() < 0.9:
            return None, None

        loss, out = self.base_step(batch, phase)
        grad_loss = self.weighted_mse(kfilts.sobel(out) - kfilts.sobel(batch.tgt), self.rec_weight)
        prior_cost = self.solver.prior_cost(self.solver.init_state(batch, out))
        self.log(f"{phase}_gloss", grad_loss, prog_bar=True, on_step=False, on_epoch=True)

        training_loss = 50 * loss + 1000 * grad_loss + 1.0 * prior_cost
        return training_loss, out

    def base_step(self, batch, phase=""):
        """
        Perform the base step for loss computation.

        Args:
            batch (dict): Input batch.
            phase (str, optional): Phase ("train" or "val").

        Returns:
            tuple: Loss and output tensor.
        """
        out = self(batch=batch)
        loss = self.weighted_mse(out - batch.tgt, self.rec_weight)

        with torch.no_grad():
            self.log(f"{phase}_mse", 10000 * loss * self.norm_stats[1]**2, prog_bar=True, on_step=False, on_epoch=True)
            self.log(f"{phase}_loss", loss, prog_bar=True, on_step=False, on_epoch=True)

        return loss, out

    def configure_optimizers(self):
        """
        Configure the optimizer.

        Returns:
            torch.optim.Optimizer: Optimizer instance.
        """
        return self.opt_fn(self)

    def test_step(self, batch, batch_idx):
        """
        Perform a single test step.

        Args:
            batch (dict): Input batch.
            batch_idx (int): Batch index.
        """
        if batch_idx == 0:
            self.test_data = []
        out = self(batch=batch)
        m, s = self.norm_stats

        self.test_data.append(torch.stack(
            [
                batch.input.cpu() * s + m,
                batch.tgt.cpu() * s + m,
                out.squeeze(dim=-1).detach().cpu() * s + m,
            ],
            dim=1,
        ))

    @property
    def test_quantities(self):
        """
        Retrieve the names of test quantities.

        Returns:
            list: List of test quantity names.
        """
        return ['inp', 'tgt', 'out']

    def on_test_epoch_end(self):
        """
        Perform actions at the end of the test epoch.

        This includes logging metrics and saving test data.
        """
        rec_da = self.trainer.test_dataloaders.dataset.reconstruct(
            self.test_data, self.rec_weight.cpu().numpy()
        )

        if isinstance(rec_da, list):
            rec_da = rec_da[0]

        self.test_data = rec_da.assign_coords(
            dict(v0=self.test_quantities)
        ).to_dataset(dim='v0')

        metric_data = self.test_data.pipe(self.pre_metric_fn)
        metrics = pd.Series({
            metric_n: metric_fn(metric_data)
            for metric_n, metric_fn in self.metrics.items()
        })

        print(metrics.to_frame(name="Metrics").to_markdown())
        if self.logger:
            self.test_data.to_netcdf(Path(self.logger.log_dir) / 'test_data.nc')
            print(Path(self.trainer.log_dir) / 'test_data.nc')
            self.logger.log_metrics(metrics.to_dict())
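
A minimal training sketch (the datamodule is a placeholder the user must provide, and its batches need input and tgt tensor attributes; shapes and hyperparameters here are assumptions):

# Hedged sketch: wrapping a GradSolver in Lit4dVarNet for Lightning training.
import numpy as np
import pytorch_lightning as pl
import torch

from ocean4dvarnet.models import (
    BaseObsCost, BilinAEPriorCost, ConvLstmGradModel, GradSolver, Lit4dVarNet,
)

dim_in = 15
solver = GradSolver(
    prior_cost=BilinAEPriorCost(dim_in=dim_in, dim_hidden=64),
    obs_cost=BaseObsCost(),
    grad_mod=ConvLstmGradModel(dim_in=dim_in, dim_hidden=96),
    n_step=5,
)
lit = Lit4dVarNet(
    solver=solver,
    rec_weight=np.ones((dim_in, 64, 64), dtype="float32"),  # uniform weighting
    opt_fn=lambda m: torch.optim.Adam(m.parameters(), lr=1e-3),
)
trainer = pl.Trainer(max_epochs=1)
# trainer.fit(lit, datamodule=my_datamodule)  # assumed user-provided datamodule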

norm_stats property

Retrieve normalization statistics (mean, std).

Returns:

    tuple: Normalization statistics (mean, std).

test_quantities property

Retrieve the names of test quantities.

Returns:

    list: List of test quantity names.

__init__(solver, rec_weight, opt_fn, test_metrics=None, pre_metric_fn=None, norm_stats=None, persist_rw=True)

Initialize the Lit4dVarNet module.

Parameters:

    solver (GradSolver): The solver used for optimization.
    rec_weight (numpy.ndarray): Reconstruction weight for loss computation.
    opt_fn (callable): Function to configure the optimizer.
    test_metrics (dict, optional): Dictionary of test metrics. Defaults to None.
    pre_metric_fn (callable, optional): Preprocessing function for metrics. Defaults to None.
    norm_stats (tuple, optional): Normalization statistics (mean, std). Defaults to None.
    persist_rw (bool, optional): Whether to persist reconstruction weight as a buffer. Defaults to True.

Source code in ocean4dvarnet/models.py
def __init__(
    self, solver, rec_weight, opt_fn, test_metrics=None,
    pre_metric_fn=None, norm_stats=None, persist_rw=True
):
    """
    Initialize the Lit4dVarNet module.

    Args:
        solver (GradSolver): The solver used for optimization.
        rec_weight (numpy.ndarray): Reconstruction weight for loss computation.
        opt_fn (callable): Function to configure the optimizer.
        test_metrics (dict, optional): Dictionary of test metrics.
        pre_metric_fn (callable, optional): Preprocessing function for metrics.
        norm_stats (tuple, optional): Normalization statistics (mean, std).
        persist_rw (bool, optional): Whether to persist reconstruction weight as a buffer.
    """
    super().__init__()
    self.solver = solver
    self.register_buffer('rec_weight', torch.from_numpy(rec_weight), persistent=persist_rw)
    self.test_data = None
    self._norm_stats = norm_stats
    self.opt_fn = opt_fn
    self.metrics = test_metrics or {}
    self.pre_metric_fn = pre_metric_fn or (lambda x: x)

base_step(batch, phase='')

Perform the base step for loss computation.

Parameters:

    batch (dict): Input batch.
    phase (str, optional): Phase ("train" or "val"). Defaults to "".

Returns:

    tuple: Loss and output tensor.

Source code in ocean4dvarnet/models.py
def base_step(self, batch, phase=""):
    """
    Perform the base step for loss computation.

    Args:
        batch (dict): Input batch.
        phase (str, optional): Phase ("train" or "val").

    Returns:
        tuple: Loss and output tensor.
    """
    out = self(batch=batch)
    loss = self.weighted_mse(out - batch.tgt, self.rec_weight)

    with torch.no_grad():
        self.log(f"{phase}_mse", 10000 * loss * self.norm_stats[1]**2, prog_bar=True, on_step=False, on_epoch=True)
        self.log(f"{phase}_loss", loss, prog_bar=True, on_step=False, on_epoch=True)

    return loss, out

configure_optimizers()

Configure the optimizer.

Returns:

    torch.optim.Optimizer: Optimizer instance.

Source code in ocean4dvarnet/models.py
def configure_optimizers(self):
    """
    Configure the optimizer.

    Returns:
        torch.optim.Optimizer: Optimizer instance.
    """
    return self.opt_fn(self)

forward(batch)

Forward pass through the solver.

Parameters:

    batch (dict): Input batch.

Returns:

    torch.Tensor: Solver output.

Source code in ocean4dvarnet/models.py
def forward(self, batch):
    """
    Forward pass through the solver.

    Args:
        batch (dict): Input batch.

    Returns:
        torch.Tensor: Solver output.
    """
    return self.solver(batch)

on_test_epoch_end()

Perform actions at the end of the test epoch.

This includes logging metrics and saving test data.

Source code in ocean4dvarnet/models.py
def on_test_epoch_end(self):
    """
    Perform actions at the end of the test epoch.

    This includes logging metrics and saving test data.
    """
    rec_da = self.trainer.test_dataloaders.dataset.reconstruct(
        self.test_data, self.rec_weight.cpu().numpy()
    )

    if isinstance(rec_da, list):
        rec_da = rec_da[0]

    self.test_data = rec_da.assign_coords(
        dict(v0=self.test_quantities)
    ).to_dataset(dim='v0')

    metric_data = self.test_data.pipe(self.pre_metric_fn)
    metrics = pd.Series({
        metric_n: metric_fn(metric_data)
        for metric_n, metric_fn in self.metrics.items()
    })

    print(metrics.to_frame(name="Metrics").to_markdown())
    if self.logger:
        self.test_data.to_netcdf(Path(self.logger.log_dir) / 'test_data.nc')
        print(Path(self.trainer.log_dir) / 'test_data.nc')
        self.logger.log_metrics(metrics.to_dict())

step(batch, phase='')

Perform a single step for training or validation.

Parameters:

    batch (dict): Input batch.
    phase (str, optional): Phase ("train" or "val"). Defaults to "".

Returns:

    tuple: Loss and output tensor.

Source code in ocean4dvarnet/models.py
def step(self, batch, phase=""):
    """
    Perform a single step for training or validation.

    Args:
        batch (dict): Input batch.
        phase (str, optional): Phase ("train" or "val").

    Returns:
        tuple: Loss and output tensor.
    """
    if self.training and batch.tgt.isfinite().float().mean() < 0.9:
        return None, None

    loss, out = self.base_step(batch, phase)
    grad_loss = self.weighted_mse(kfilts.sobel(out) - kfilts.sobel(batch.tgt), self.rec_weight)
    prior_cost = self.solver.prior_cost(self.solver.init_state(batch, out))
    self.log(f"{phase}_gloss", grad_loss, prog_bar=True, on_step=False, on_epoch=True)

    training_loss = 50 * loss + 1000 * grad_loss + 1.0 * prior_cost
    return training_loss, out
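
The training objective therefore mixes three fixed-weight terms, per the code above:

\[
\mathcal{L}_{\mathrm{train}} = 50\,\mathcal{L}_{\mathrm{mse}} + 1000\,\mathcal{L}_{\mathrm{grad}} + 1.0\,\mathcal{L}_{\mathrm{prior}}
\]

where \mathcal{L}_{\mathrm{mse}} is the weighted reconstruction error from base_step, \mathcal{L}_{\mathrm{grad}} is the same error computed on Sobel-filtered fields (penalizing misplaced gradients and fronts), and \mathcal{L}_{\mathrm{prior}} is the prior cost evaluated at the solver output. Note also the guard at the top: training batches whose target is more than 10% NaN are skipped entirely.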

test_step(batch, batch_idx)

Perform a single test step.

Parameters:

    batch (dict): Input batch.
    batch_idx (int): Batch index.

Source code in ocean4dvarnet/models.py
def test_step(self, batch, batch_idx):
    """
    Perform a single test step.

    Args:
        batch (dict): Input batch.
        batch_idx (int): Batch index.
    """
    if batch_idx == 0:
        self.test_data = []
    out = self(batch=batch)
    m, s = self.norm_stats

    self.test_data.append(torch.stack(
        [
            batch.input.cpu() * s + m,
            batch.tgt.cpu() * s + m,
            out.squeeze(dim=-1).detach().cpu() * s + m,
        ],
        dim=1,
    ))

training_step(batch, batch_idx)

Perform a single training step.

Parameters:

    batch (dict): Input batch.
    batch_idx (int): Batch index.

Returns:

    torch.Tensor: Training loss.

Source code in ocean4dvarnet/models.py
def training_step(self, batch, batch_idx):
    """
    Perform a single training step.

    Args:
        batch (dict): Input batch.
        batch_idx (int): Batch index.

    Returns:
        torch.Tensor: Training loss.
    """
    return self.step(batch, "train")[0]

validation_step(batch, batch_idx)

Perform a single validation step.

Parameters:

    batch (dict): Input batch.
    batch_idx (int): Batch index.

Returns:

    torch.Tensor: Validation loss.

Source code in ocean4dvarnet/models.py
def validation_step(self, batch, batch_idx):
    """
    Perform a single validation step.

    Args:
        batch (dict): Input batch.
        batch_idx (int): Batch index.

    Returns:
        torch.Tensor: Validation loss.
    """
    return self.step(batch, "val")[0]

weighted_mse(err, weight) staticmethod

Compute the weighted mean squared error.

Parameters:

    err (torch.Tensor): Error tensor.
    weight (torch.Tensor): Weight tensor.

Returns:

    torch.Tensor: Weighted MSE loss.

Source code in ocean4dvarnet/models.py
@staticmethod
def weighted_mse(err, weight):
    """
    Compute the weighted mean squared error.

    Args:
        err (torch.Tensor): Error tensor.
        weight (torch.Tensor): Weight tensor.

    Returns:
        torch.Tensor: Weighted MSE loss.
    """
    err_w = err * weight[None, ...]
    non_zeros = (torch.ones_like(err) * weight[None, ...]) == 0.0
    err_num = err.isfinite() & ~non_zeros
    if err_num.sum() == 0:
        return torch.scalar_tensor(1000.0, device=err_num.device).requires_grad_()
    loss = F.mse_loss(err_w[err_num], torch.zeros_like(err_w[err_num]))
    return loss
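
A quick sketch of the masking behavior: entries with a non-finite error or a zero weight are excluded from the mean, and a constant fallback of 1000 is returned when no entry survives.

# Hedged sketch: weighted_mse skips NaNs and zero-weight positions.
import torch

from ocean4dvarnet.models import Lit4dVarNet

err = torch.tensor([[[1.0, float("nan")], [2.0, 3.0]]])  # shape (1, 2, 2)
weight = torch.tensor([[1.0, 1.0], [0.0, 1.0]])          # zero weight masks position (1, 0)

print(Lit4dVarNet.weighted_mse(err, weight))  # mean of 1.0**2 and 3.0**2 -> 5.0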