Download & Install

Hardware requirements

Warp is built to perform all calculations on GPUs using Nvidia’s CUDA platform. Your system therefore needs at least one recent Nvidia GPU with at least 8 GB of memory. Consumer-grade GeForce models work just as well as expensive Quadro and Tesla cards. The minimum architecture is Maxwell. This comprehensive table will tell you what’s Maxwell or better.
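
If you want to check quickly whether the GPUs in your system meet the memory requirement, a small Python sketch like the one below can help. It assumes the Nvidia driver’s nvidia-smi tool is on your PATH; it does not check for the Maxwell-or-better architecture, which you can look up in the table linked above.

    # Minimal sketch: list the installed Nvidia GPUs and their memory via nvidia-smi.
    # Assumes the Nvidia driver (and therefore nvidia-smi) is installed and on the PATH.
    import subprocess

    output = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader,nounits"],
        text=True,
    )

    for line in output.strip().splitlines():
        name, mem_mib = (field.strip() for field in line.split(","))
        status = "OK" if int(mem_mib) >= 8 * 1024 else "below the 8 GB recommendation"
        print(f"{name}: {mem_mib} MiB ({status})")  # memory.total is reported in MiB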

Our own configuration for running Warp on-the-fly in the microscope room is as follows:

AMD Threadripper with 16 cores
128 GB RAM
4x GeForce 1080 Ti (please read the next paragraph)
256 GB SSD
10 Gbit Ethernet
4K Display (UI claustrophobia sets in below 2560×1440)

The current version can handle roughly 200 TIFF movies (K2 counting, 40 frames) per hour on a single GeForce 1080 GPU, which is likely faster than most microscopes can generate data. If you have a multi-GPU system like the one above, you can use it to handle the output of several microscopes with separate Warp instances: launch the executable once per microscope and make sure each instance uses a different set of GPUs, selected in the window bar.


Software prerequisites

Windows 7 or later (10 if you want to see what the progress bar mouse eats! 🐭🍓)
.NET Framework 4.7 or later
Visual C++ 2017 Redistributable
Latest GPU driver from Nvidia
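
If you prefer to verify the software prerequisites from a script rather than by clicking through dialogs, the sketch below checks for .NET Framework 4.7 or later by reading the Release value Microsoft documents for the 4.x family (460798 is the lowest value corresponding to 4.7). It only covers the .NET check; the Visual C++ redistributable and GPU driver are easiest to confirm by hand.

    # Minimal sketch: check for .NET Framework 4.7+ via the Windows registry.
    # 460798 is the lowest "Release" value that corresponds to .NET Framework 4.7.
    import winreg

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full",
    )
    release, _ = winreg.QueryValueEx(key, "Release")
    if release >= 460798:
        print(".NET Framework 4.7 or later is installed.")
    else:
        print(".NET Framework 4.7 or later is missing - please install it.")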


Amazon cloud

If you don’t want to invest in your own hardware just yet, you can use (or test) Warp on an AWS instance thanks to Michael Cianfrocco. Here is Michael’s user guide.


Download

Warp 1.0.6 installer (what’s new?)
BoxNet training data
BoxNet models (select latest)


Installation

Make sure all prerequisites are installed, then run Warp’s installer. By default, the installer will suggest the current user’s program directory. If you’d like to make Warp available to other users (although running it under a single account is probably more convenient), please make sure the installation directory can be written to without administrator privileges. Otherwise, application settings and re-trained BoxNet models cannot be saved.
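
If you are unsure whether your chosen directory is writable without elevation, a quick check like the one below will tell you; the path is only an example, so substitute your own installation directory.

    # Minimal sketch: verify that the chosen Warp directory is writable without
    # administrator privileges, so settings and re-trained BoxNet models can be saved.
    import os, tempfile

    warp_dir = r"C:\Users\Public\Warp"  # hypothetical installation directory

    try:
        os.makedirs(warp_dir, exist_ok=True)
        with tempfile.TemporaryFile(dir=warp_dir):
            pass
        print(f"{warp_dir} is writable - good to go.")
    except PermissionError:
        print(f"{warp_dir} is not writable without elevation - pick another directory.")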

Download the latest BoxNet models. There are two prefixes: BoxNet2 and BoxNet2Mask – please refer to the BoxNet user guide page to find out what they mean. If you’d like to be able to re-train BoxNet models using the central data set in addition to your data, download the training data as well. Extract the models into [Warp directory]/boxnet2models, and the training data into [Warp directory]/boxnet2data.
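
If you prefer to script this step, the sketch below unpacks the downloads into the folders Warp expects. The archive file names are hypothetical – use whatever you actually downloaded; only the boxnet2models and boxnet2data folder names come from the instructions above.

    # Minimal sketch: extract the BoxNet downloads into the folders Warp looks in.
    # Archive names are hypothetical; adjust them to the files you downloaded.
    import zipfile
    from pathlib import Path

    warp_dir = Path(r"C:\Users\Public\Warp")  # hypothetical Warp installation directory

    zipfile.ZipFile("BoxNet2_latest.zip").extractall(warp_dir / "boxnet2models")
    zipfile.ZipFile("boxnet2data.zip").extractall(warp_dir / "boxnet2data")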

If you’re updating, just install the new version over the previous one – no need to uninstall first. All settings and BoxNet models will be preserved.

That’s it! Enjoy Warp 😘


Compute environment

For full on-the-fly processing, Warp needs to be able to write to storage that your cluster can access. For 90+ movies per hour, fast network connections are highly recommended. Our own network looks like this:

[Diagram: our compute environment and network layout]
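
To get a feeling for the bandwidth on-the-fly processing requires, here is a back-of-the-envelope sketch. The per-movie size is an assumption – compressed K2 counting TIFF stacks are often on the order of a gigabyte – so substitute the sizes your detector actually produces.

    # Minimal sketch: sustained network bandwidth needed to move movies to shared storage.
    movies_per_hour = 90   # lower bound mentioned above
    movie_size_gb = 1.0    # assumed average size of a compressed movie

    gbit_per_s = movies_per_hour * movie_size_gb * 8 / 3600
    print(f"~{gbit_per_s:.2f} Gbit/s sustained for {movies_per_hour} movies/hour "
          f"at {movie_size_gb} GB each")
    # ~0.2 Gbit/s sustained in this example - transfer peaks, metadata and particle
    # export add to that, so a fast (e.g. 10 Gbit) link leaves comfortable headroom.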


Change log

1.0.6

  • Improved navigation
  • Stability fixes
  • Updated BoxNet model
  • Tilt series import from IMOD
  • Major changes in the handling of TIFF files, see details
  • File export required for RELION’s Bayesian particle polishing
  • Experimental GUI for multi-particle refinement of tilt series, M
  • Volume denoising in a separate command line tool
  • Command line tool for generating pretty angular distribution plots from RELION’s output
  • CUDA 10 – this will likely require a GPU driver update

1.0.5

  • Micrograph denoising, making even very low-defocus particles visible.
  • Faster generation of goodparticles_*.star files in on-the-fly mode.
  • Average |CTF| plot in Overview tab to help you optimize the defocus range.
  • During particle export, the weights for dose weighting are now re-normalized to sum up to 1.
  • Better crash diagnostics.

1.0.4

  • More flexible handling of gain references: In addition to the absolute path, the reference’s unique SHA1 hash is stored in each item’s processing settings. If you relocate or rename the gain reference later, items won’t be marked as outdated as long as the reference’s content stays the same.
  • Flip and transpose operations for the gain reference image available in the UI.
  • DM4 format support.
  • More nagging about submitting BoxNet training data to the central repository.

1.0.3

  • Faster processing: Running the full pipeline, you can now process 200+ movies per hour on a single GeForce 1080 GPU.

1.0.1

  • Initial release.