TorchCraftAI
A bot for machine learning research on StarCraft: Brood War
General utilities.
Namespaces

fsutils
    Utility functions for interacting with the file system.
zstd
Classes

class AssertionFailure
class BufferedConsumer
    A simple producer/consumer class.
class BufferedProducer
    A simple producer class.
class CircularBuffer
class DataReader
    A multi-threaded reader for cerealized data.
struct DataReader_NoTransform
class DataReaderIterator
    A multi-threaded iterator that performs decerealization of objects and returns data in batches.
class DataReaderTransform
    Wrapper for DataReaderIterator that applies an additional transform to the resulting batches.
class Exception
class IMembuf
    A stream buffer for reading from a vector of bytes.
class LRUCache
class OMembuf
    A stream buffer for writing to an accessible vector of bytes.
class Rand
class ScopeGuard
struct WeightSummary
    Collects metrics about a container's weights.
Typedefs

using VarList = torch::autograd::variable_list
using HookFunction = std::function<VarList(const VarList&, const VarList&)>
using TensorTransform = std::function<torch::Tensor(torch::Tensor)>
    A convenience type for applying a tensor transformation to a complex type.
using DataReaderThreadInitF = std::function<void()>
using hires_clock = std::chrono::steady_clock
Enumerations

enum PadType { Zero, Reflection, Replication }
enum ConcatType { None, Input, Mirror }
enum UpsamplingType { None, Bilin, Deconv }
enum DecodeType { None, Conv, Deconv }
enum DilationScheme { None, Linear, Exponential }
enum UpsampleMode { Nearest, Linear, Bilinear, Trilinear }
    Mimics pytorch's upsample function.
Functions

backward::StackTrace createStackTrace()
std::string tensorInfo(torch::Tensor x)
    Returns a string containing the tensor type and sizes.
std::string variantInfo(ag::Variant x)
    Returns a string describing the content of a variant.
std::string tensorStats(torch::Tensor x)
    Returns a string containing the tensor info, the max/min/mean and sum.
void checkTensor(torch::Tensor x, bool logOnError = true)
    Throws if the given float tensor has a NaN or +/- infinity.
torch::Tensor const& addHook(torch::Tensor const& tensor, HookFunction&& f)
    Adds a hook to the backward pass of the variable.
void assertSize(const std::string& name, const torch::Tensor& tensor, at::IntList sizes)
    Verifies that a tensor's dimension sizes match expectations.
std::ostream& operator<<(std::ostream& out, const WeightSummary& summary)
std::pair<int64_t, int64_t> torchMemoryUsage(int device = 0)
    Returns the current memory usage; the first element is the amount allocated (currently used by tensors that are alive), the second element is the amount cached by the caching allocator.
torch::Tensor normalPDF(torch::Tensor x, torch::Tensor mean, torch::Tensor std)
    Computes the PDF of the normal distribution.
torch::Tensor normalPDF(torch::Tensor x, torch::Tensor mean, double std)
AUTOGRAD_CONTAINER_CLASS(MLP)
    Simple MLP of nLayers layers, with all hidden sizes the same.
AUTOGRAD_CONTAINER_CLASS(GatedConv)
AUTOGRAD_CONTAINER_CLASS(ConvBlock)
    Simple convolutional block, with optional residual connection. From a user perspective, the convolution parameters behave as if the block were a single conv layer.
AUTOGRAD_CONTAINER_CLASS(EncoderDecoder)
AUTOGRAD_CONTAINER_CLASS(MHAttention)
torch::Tensor repeat2d(torch::Tensor data, at::IntList sizes)
    Repeats a 1D tensor so that you end up with a (#channels, sizes[0], sizes[1]) tensor.
torch::Tensor scatterSum2d(torch::Tensor positions, torch::Tensor data, at::IntList sizes)
    Scatters data into dest at the given positions.
torch::Tensor makeBatch(ag::tensor_list const&, double pad = 0)
    Equivalent to a stack along dim 0 of the input, but with the values padded so that the result is rectangular.
ag::Variant makeBatchVariant(const std::vector<ag::Variant>& queries, double pad = 0)
    Works like makeBatch but handles more input types.
std::vector<ag::Variant> unBatchVariant(ag::Variant const& batch, int stride = 1, bool maskOut = false, double maskValue = -1)
    The opposite of makeBatchVariant.
torch::Tensor pad2d(torch::Tensor input, at::IntList pad)
    Zero-padding (only supports 3d input).
torch::Tensor padNd(torch::Tensor input, at::IntList pad)
    Zero-padding (for any number of dimensions). For every dimension of the input, pad contains 2 elements: the padding before and after along that dimension.
torch::Tensor flip(torch::Tensor x, int dim)
    Flips a tensor along a given dimension.
torch::Tensor upsample(torch::Tensor input, UpsampleMode mode, at::IntList size)
torch::Tensor upsample(torch::Tensor input, UpsampleMode mode, int scaleFactor)
void zerosToOnes_(torch::Tensor x)
    Replaces (in-place) all zeroes of x by ones.
torch::Tensor tensorFromNpyArray(cnpy::NpyArray array, torch::TensorOptions op)
torch::Tensor squash(torch::Tensor x, int i, int j)
    Squashes contiguous dimensions of a tensor into a single dimension.
torch::Tensor unsquash(torch::Tensor x, int i, at::IntList sizes)
    Unsquashes a dimension of a tensor into several dimensions.
torch::Tensor maskedSum(torch::Tensor x, torch::Tensor mask)
    Sums x across non-masked indices.
torch::Tensor maskedMean(torch::Tensor x, torch::Tensor mask)
    Averages x over non-masked indices, returning 0 if all indices are masked.
torch::Tensor mseLoss(torch::Tensor x, torch::Tensor y, torch::Tensor mask, bool sizeAverage = true, bool reduce = true)
    Computes the MSE loss between x and y.
torch::Tensor crossEntropyLoss(torch::Tensor input, int dim, torch::Tensor target, torch::Tensor weight, torch::Tensor mask, Reduction::Reduction reduction)
torch::Tensor nllLoss(torch::Tensor input, int dim, torch::Tensor target, torch::Tensor weight, torch::Tensor mask, Reduction::Reduction reduction)
void clipGradientNorms(std::vector<torch::Tensor> parameters, float maxNorm)
    Rescales gradients so that the norm of all gradients (concatenated) is smaller than maxNorm.
torch::Tensor maskedSoftmax(torch::Tensor input, torch::Tensor mask, int dim, float clampEpsilon = 0)
    Computes a masked softmax of a tensor in a numerically stable way by removing the max value before exponentiating.
std::tuple<torch::Tensor, torch::Tensor> maskedMax(torch::Tensor input, torch::Tensor mask, int dim, bool keepDim = false)
    Computes a masked max/argmax of a tensor.
torch::Tensor weightedMaskedSoftmax(torch::Tensor input, torch::Tensor mask, int dim, float clampEpsilon = 0)
    Computes a weighted masked softmax of a tensor in a numerically stable way by removing the max value before exponentiating.
torch::Tensor selectIndex(torch::Tensor x, torch::Tensor y, int axis, bool keepDim)
torch::Tensor extendIndex(torch::Tensor y, int axis, int d)
    Returns a byte tensor x such that selectIndex(x, y, axis) is all 1s.
void maskedCopy_(torch::Tensor x, torch::Tensor mask, torch::Tensor source)
    For 1D tensors, this is equivalent to: x[i] <- source[i] if mask[i] == 1.
torch::Tensor maskedCopy(torch::Tensor x, torch::Tensor mask, torch::Tensor source)
    Immutable masked copy (equivalent to x.clone().maskedCopy_()).
void putNd_(torch::Tensor x, torch::Tensor index, torch::Tensor source, bool accumulate = false)
    Copies elements from source into x at positions determined by index.
torch::Tensor takeNd(torch::Tensor x, torch::Tensor index)
    Inverse operation of putNd_.
torch::Tensor indexMean(int size, int dim, torch::Tensor index, torch::Tensor source)
    Like zeros.index_add_ but with the mean.
torch::Tensor unsqueezes(int before, torch::Tensor x, int after)
    Performs multiple unsqueezes on the first and last dimensions.
torch::Tensor meshGrid(ag::tensor_list tensors)
    Takes N 1D tensors xi of size Xi and returns a tensor y of size X1 x ... x XN x N.
ag::Variant applyTransform(ag::Variant input, const TensorTransform& fun)
    Applies a tensor transformation to a complex type.
at::Device getVariantDevice(ag::Variant const& x)
    Utility to get the device of a variant.
bool gpuAvailable()
    Checks if a CUDA GPU is available.
std::string toHex(std::vector<uint8_t> const& digest)
std::vector<uint8_t> sha256sum(void const* data, size_t len)
std::vector<uint8_t> md5sum(void const* data, size_t len)
std::vector<uint8_t> sha256sum(std::string_view data)
std::vector<uint8_t> sha256sum(std::vector<uint8_t> const& data)
std::vector<uint8_t> md5sum(std::string_view data)
std::vector<uint8_t> md5sum(std::vector<uint8_t> const& data)
template<typename T, typename F>
std::unique_ptr<DataReaderTransform<T, F>> makeDataReaderTransform(std::unique_ptr<DataReaderIterator<T>>&& it, F&& function, DataReaderThreadInitF init = DataReader_NoopF)
template<typename T>
auto makeDataReader(std::vector<std::string> paths, size_t numThreads, size_t batchSize, std::string pathPrefix = std::string(), DataReaderThreadInitF init = DataReader_NoopF)
template<typename T, typename F>
auto makeDataReader(std::vector<std::string> paths, size_t numThreads, size_t batchSize, F transform, std::string pathPrefix = std::string(), DataReaderThreadInitF init = DataReader_NoopF)
template<typename Enumeration>
auto enumAsInt(Enumeration const value) -> typename std::underlying_type<Enumeration>::type
template<class Function>
ScopeGuard<Function> makeGuard(Function f)
template<class T, class Compare>
constexpr const T& clamp(const T& v, const T& lo, const T& hi, Compare comp)
template<class T>
constexpr const T& clamp(const T& v, const T& lo, const T& hi)
template<class T>
constexpr const T& safeClamp(const T& v1, const T& v2, const T& v3)
std::string randId(size_t len)
template<typename Iter, typename RandomGenerator>
Iter select_randomly(Iter start, Iter end, RandomGenerator& g)
std::vector<std::string> stringSplit(char const* str, size_t len, char sep, size_t max)
    Splits a string into parts delimited by the given separator character.
std::vector<std::string> stringSplit(char const* str, char sep, size_t max)
std::vector<std::string> stringSplit(std::string const& str, char sep, size_t max)
bool startsWith(std::string const& str, std::string const& prefix)
bool endsWith(std::string const& str, std::string const& suffix)
bool gmatch(std::string_view str, std::string_view pattern)
    Glob-style pattern matching.
bool gmatchi(std::string_view str, std::string_view pattern)
    Glob-style pattern matching (case-insensitive).
template<typename T>
std::string stringToLower(T&& str)
template<typename T>
std::string joinVector(std::vector<T> const& v, char sep)
double memoryUsage()
void setCurrentThreadName(std::string const& name)
double timestamp(std::chrono::system_clock::time_point tp = std::chrono::system_clock::now())

Variables

auto const DataReader_NoopF = [] {}
General utilities.
using common::DataReaderThreadInitF = std::function<void()>
using common::hires_clock = std::chrono::steady_clock
using common::HookFunction = std::function<VarList(const VarList&, const VarList&)>
using common::TensorTransform = std::function<torch::Tensor(torch::Tensor)>
This is a convenience type used to apply a tensor transformation to a complex type (see applyTransform).
For example, you would like to write something like t = t.view(-1), but t is a tensor_list (and you'd like the operation to be applied to each element of the list). You can write instead: t = applyTransform(t, [](torch::Tensor t) { return t.view(-1); });
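A minimal sketch of this pattern, assuming a variant holding a tensor_list (the tensor shapes are illustrative):

    ag::Variant t = ag::tensor_list{torch::rand({2, 3}), torch::rand({4, 5})};
    // Flatten every tensor held by the variant, whatever its nesting.
    t = common::applyTransform(t, [](torch::Tensor x) { return x.view(-1); });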
using common::VarList = torch::autograd::variable_list
torch::Tensor const& common::addHook(torch::Tensor const& tensor, HookFunction&& f)
Adds a hook to the backward pass of the variable.
The hook function takes gradInput and gradOutput, and should by default return gradInput; that is, the identity hook looks like: [](VarList const& gradInp, VarList const& gradOutp) { return gradInp; }
https://pytorch.org/docs/stable/nn.html?highlight=hook#torch.nn.Module.register_backward_hook
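As a sketch of assumed usage, here is a hook that halves incoming gradients (the tensor and scaling factor are illustrative):

    auto t = torch::rand({3, 3}, torch::requires_grad());
    common::addHook(t, [](VarList const& gradInp, VarList const& /*gradOutp*/) {
      VarList scaled;
      for (auto& g : gradInp) {
        scaled.push_back(g * 0.5); // halve each incoming gradient
      }
      return scaled;
    });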
ag::Variant common::applyTransform(ag::Variant input, const TensorTransform& fun)
void common::assertSize(const std::string& name, const torch::Tensor& tensor, at::IntList sizes)
Verifies that a tensor's dimension sizes match expectations.
Throws a std::range_error if they don't match. Dimensions given as negative values (e.g. -1) are not checked.
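A minimal sketch (the tensor name is hypothetical):

    auto positions = torch::zeros({8, 2});
    common::assertSize("positions", positions, {-1, 2}); // ok: any batch size, 2 columns
    common::assertSize("positions", positions, {8, 3});  // throws std::range_error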
common::AUTOGRAD_CONTAINER_CLASS(MLP)
Simple MLP of nLayers layers, with all hidden sizes the same.
Optionally, we can zero the last layer, which is useful if the output is supposed to be a probability distribution, since values will be uniform after softmax.
common::AUTOGRAD_CONTAINER_CLASS(GatedConv)
common::AUTOGRAD_CONTAINER_CLASS(ConvBlock)
Simple convolutional block, with optional residual connection. From a user perspective, the convolution parameters behave as if the block were a single conv layer.
For example, if the stride is 2, the output will be half the size of the input, irrespective of the number of inner layers. In practice the stride and dilation are only applied to the first layer. The block also applies padding to compensate for the kernel size and the dilation. That means that if the input has size h x w, the output will be h' x w' with h' = (h - 1)/stride + 1 and w' = (w - 1)/stride + 1.
Options:
- Number of feature channels in the input
- Number of feature channels in the output
- Non-linearity inserted between each convolution
- If true, the module performs transposed convolutions instead
- Size of the convolution kernels (we use kernelSize x kernelSize)
- Stride of the convolutions
- Dilation of the convolutions
- Add a residual convolution when true
- Add batchNorm layers where appropriate, if true
- If true, the intermediate convolutions will have 4 times fewer features than the output
- Number of convolution layers
- Bias in the convolutions
- Whether to use gated convolutions
- How to pad
common::AUTOGRAD_CONTAINER_CLASS(EncoderDecoder)
Options:
- Shape of the input, given as [c, h, w], where c is the number of channels, h the height and w the width
- Number of feature channels in the intermediate layers
- Number of feature channels in the output
- Non-linearity inserted between each convolution
- Strategy for concatenating previous layers during decoding
- Strategy for upsampling, when needed
- Strategy for decoding
- Strategy for dilation
- Size of the convolution kernels (we use kernelSize x kernelSize)
- Stride of the convolutions
- Add a residual convolution when true
- Add batchNorm layers where appropriate, if true
- If true, the intermediate convolutions will have 4 times fewer features than the output
- Number of convolutional blocks in the encoding (if there is decoding, it will contain the same number of blocks)
- Number of convolution layers in each block
- Bias in the convolutions
- Whether to use gated convolutions
common::AUTOGRAD_CONTAINER_CLASS(MHAttention)
void common::checkTensor(torch::Tensor x, bool logOnError = true)
Throws if the given float tensor has a NaN or +/- infinity.
template<class T, class Compare>
constexpr const T& common::clamp(const T& v, const T& lo, const T& hi, Compare comp)
template<class T>
constexpr const T& common::clamp(const T& v, const T& lo, const T& hi)
void common::clipGradientNorms(std::vector<torch::Tensor> parameters, float maxNorm)
Rescale gradients so that the norm of all gradients (concatenated) is smaller than maxNorm.
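A minimal sketch of where this is typically called, assuming a torch::nn-style model and optimizer (names illustrative):

    loss.backward();
    common::clipGradientNorms(model->parameters(), 5.0f); // global grad norm <= 5
    optimizer.step();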
backward::StackTrace common::createStackTrace()
torch::Tensor common::crossEntropyLoss(torch::Tensor input, int dim, torch::Tensor target, torch::Tensor weight, torch::Tensor mask, Reduction::Reduction reduction)
bool common::endsWith(std::string const& str, std::string const& suffix)
template<typename Enumeration>
auto common::enumAsInt(Enumeration const value) -> typename std::underlying_type<Enumeration>::type
torch::Tensor common::extendIndex(torch::Tensor y, int axis, int d)
Returns a byte tensor x such that selectIndex(x, y, axis) are only 1s.
y has shape ... x 1 x ...; the returned x has shape ... x d x ...
torch::Tensor common::flip(torch::Tensor x, int dim)
Flips a tensor along a given dimension.
y[-a-,i,-b-] = x[-a-,n-i-1,-b-]
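A minimal sketch:

    auto x = torch::arange(6).view({3, 2});
    auto y = common::flip(x, 0); // reverses the rows: y[0] == x[2], y[2] == x[0]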
at::Device common::getVariantDevice(ag::Variant const& x)
Utility to get the device of a variant.
If the variant contains several tensors, they are assumed to be on the same device.
bool common::gmatch(std::string_view str, std::string_view pattern)
Glob-style pattern matching.
bool common::gmatchi(std::string_view str, std::string_view pattern)
Glob-style pattern matching (case-insensitive)
bool common::gpuAvailable()
Checks if a CUDA GPU is available.
torch::Tensor common::indexMean(int size, int dim, torch::Tensor index, torch::Tensor source)
Like zeros.index_add_ but with the mean.
source has shape X1 x ... x Xdim-1 x N x Xdim+1 x ... x Xd.
index has shape N, with values ranging from 0 to size - 1.
x (the return value) has shape X1 x ... x Xdim-1 x size x Xdim+1 x ... x Xd.
x[-a-,i,-b-] is the mean of {source[-a-,j,-b-] where index[j] = i}, and zero if this set is empty.
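A minimal sketch with size = 3 and dim = 0 (the tensor values and literal syntax are illustrative):

    auto source = torch::arange(1, 7).to(torch::kFloat).view({3, 2}); // rows {1,2}, {3,4}, {5,6}
    auto index = torch::tensor({0, 0, 2}, torch::kLong);
    auto x = common::indexMean(3, 0, index, source); // shape (3, 2)
    // x[0] == {2, 3} (mean of rows 0 and 1), x[1] == {0, 0} (empty set), x[2] == {5, 6}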
template<typename T>
std::string common::joinVector(std::vector<T> const& v, char sep)
torch::Tensor common::makeBatch(ag::tensor_list const&, double pad = 0)
Equivalent to a stack along dim 0 of the input, but with the values padded so that the result is rectangular.
For example, if the list contains tensors of sizes [(6, 2), (5, 2), (7, 3)], the result is a tensor of size (3, 7, 3).
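A minimal sketch, batching variable-length 1D tensors:

    ag::tensor_list parts = {torch::ones({6}), torch::ones({5}), torch::ones({7})};
    auto batch = common::makeBatch(parts); // shape (3, 7); shorter rows padded with 0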
ag::Variant common::makeBatchVariant(const std::vector<ag::Variant>& queries, double pad = 0)
This function works similarly to makeBatch but handles more input types.
The query variants are required to have the same type (mixing tensors and size-1 tensor_lists is tolerated). Its exact behavior depends on the variant type.
template<typename T>
auto common::makeDataReader(std::vector<std::string> paths, size_t numThreads, size_t batchSize, std::string pathPrefix = std::string(), DataReaderThreadInitF init = DataReader_NoopF)
template<typename T, typename F>
auto common::makeDataReader(std::vector<std::string> paths, size_t numThreads, size_t batchSize, F transform, std::string pathPrefix = std::string(), DataReaderThreadInitF init = DataReader_NoopF)
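A minimal sketch of assumed usage; MyDatum is a hypothetical cereal-serializable type and the file names are illustrative:

    std::vector<std::string> paths = {"0.bin", "1.bin"};
    // 4 reader threads, batches of 32 decerealized MyDatum objects.
    auto reader = common::makeDataReader<MyDatum>(paths, 4, 32);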
template<typename T, typename F>
std::unique_ptr<DataReaderTransform<T, F>> common::makeDataReaderTransform(std::unique_ptr<DataReaderIterator<T>>&& it, F&& function, DataReaderThreadInitF init = DataReader_NoopF)
template<class Function>
ScopeGuard<Function> common::makeGuard(Function f)
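A minimal sketch (using std::fopen/std::fclose from <cstdio>; the file name is illustrative); the guard runs its function when leaving the scope, including via exceptions:

    FILE* fp = std::fopen("data.bin", "rb");
    auto guard = common::makeGuard([&] { if (fp) std::fclose(fp); });
    // ... use fp; it is closed on any exit path from this scope.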
torch::Tensor common::maskedCopy(torch::Tensor x, torch::Tensor mask, torch::Tensor source)
Immutable masked copy (equivalent to x.clone().maskedCopy_()).
NOTE: this does not work if x contains NaNs or infinities! mask should have the same type as x.
void common::maskedCopy_(torch::Tensor x, torch::Tensor mask, torch::Tensor source)
For 1D tensors, this is equivalent to: x[i] <- source[i] if mask[i] == 1.
std::tuple<torch::Tensor, torch::Tensor> common::maskedMax(torch::Tensor input, torch::Tensor mask, int dim, bool keepDim = false)
Compute a masked max/argmax of a tensor.
The passed in mask must be a variable of 0.0's and 1.0's (floats) of the same shape as the input.
Returns the max values and the corresponding indices after masking. NOTE: behavior is undefined if the mask is all zero for some batch element.
torch::Tensor common::maskedMean(torch::Tensor x, torch::Tensor mask)
Average x over non-masked indices, returning 0 if all indices are masked.
This does not work if x contains NaNs or infinities at masked indices.
torch::Tensor common::maskedSoftmax(torch::Tensor input, torch::Tensor mask, int dim, float clampEpsilon = 0)
Compute a masked softmax of a tensor in a numerically stable way by removing the max value before exponentiating.
The passed in mask must be a variable of 0.0's and 1.0's (floats) of the same shape as the input.
Returns the output after masking and softmaxing.
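A minimal sketch, masking out one entry before softmaxing over dim 1:

    auto input = torch::randn({2, 4});
    auto mask = torch::ones({2, 4}); // float 0/1 mask, same shape as input
    mask[0][3] = 0;                  // exclude entry (0, 3)
    auto probs = common::maskedSoftmax(input, mask, 1);
    // probs is 0 at masked positions; each row sums to 1 over unmasked entries.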
torch::Tensor common::maskedSum(torch::Tensor x, torch::Tensor mask)
Sum x across non-masked indices.
This does not work if x contains NaNs or infinities at masked indices.
std::vector<uint8_t> common::md5sum(std::string_view data)
std::vector<uint8_t> common::md5sum(std::vector<uint8_t> const& data)
std::vector<uint8_t> common::md5sum(void const* data, size_t len)
double common::memoryUsage()
torch::Tensor common::meshGrid(ag::tensor_list tensors)
Takes N 1D tensors xi of size Xi and returns a tensor y of size X1 x ... x XN x N such that y[a1]...[aN][i] = xi[ai].
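A minimal sketch with two 1D ranges:

    auto ys = torch::arange(0, 3); // size 3
    auto xs = torch::arange(0, 4); // size 4
    auto grid = common::meshGrid({ys, xs}); // shape (3, 4, 2)
    // grid[a][b][0] == ys[a] and grid[a][b][1] == xs[b]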
torch::Tensor common::mseLoss(torch::Tensor x, torch::Tensor y, torch::Tensor mask, bool sizeAverage = true, bool reduce = true)
Computes the MSE loss between x and y.
torch::Tensor common::nllLoss(torch::Tensor input, int dim, torch::Tensor target, torch::Tensor weight, torch::Tensor mask, Reduction::Reduction reduction)
torch::Tensor common::normalPDF(torch::Tensor x, torch::Tensor mean, torch::Tensor std)
Computes the PDF of the normal distribution.
torch::Tensor common::normalPDF(torch::Tensor x, torch::Tensor mean, double std)
std::ostream& common::operator<<(std::ostream& out, const WeightSummary& summary)
torch::Tensor common::pad2d(torch::Tensor input, at::IntList pad)
Zero-padding (only supports 3d input)
torch::Tensor common::padNd(torch::Tensor input, at::IntList pad)
Zero-padding (for any number of dimensions). For every dimension of the input, pad contains 2 elements: the padding before and after along that dimension.
void common::putNd_(torch::Tensor x, torch::Tensor index, torch::Tensor source, bool accumulate = false)
Copies elements from source into x at positions determined by index.
If accumulate is true, adds instead of copying (otherwise, each position should appear at most once in index).
x has shape X1 x ... x XD.
index has shape N x D.
source has shape N.
For 2D tensors, this is equivalent to: x[index[i][0], index[i][1]] <- source[i]
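A minimal sketch on a 3x3 tensor (the values and tensor-literal syntax are illustrative):

    auto x = torch::zeros({3, 3});
    auto index = torch::tensor({0, 1, 2, 2}, torch::kLong).view({2, 2}); // positions (0,1) and (2,2)
    auto source = torch::tensor({5.f, 7.f});
    common::putNd_(x, index, source);
    // x[0][1] == 5 and x[2][2] == 7; takeNd(x, index) recovers source.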
std::string common::randId(size_t len)
torch::Tensor common::repeat2d(torch::Tensor data, at::IntList sizes)
Repeats a 1D tensor so that you end up with a (#channels, sizes[0], sizes[1]) tensor.
template<class T>
constexpr const T& common::safeClamp(const T& v1, const T& v2, const T& v3)
torch::Tensor common::scatterSum2d(torch::Tensor positions, torch::Tensor data, at::IntList sizes)
Scatter data into dest at given positions.
Depending on the device the data lives on, different algorithms will be used; there's a benchmark for this function in the corresponding unit tests.
positions is a (b, n, 2) integer tensor with elements greater than or equal to zero. positions[i][0] refers to the Y position, positions[i][1] to the X position of data entry i. data is a (b, n, c) tensor. Each of the n entries will be placed in dest according to the respective position. On each batch, entries up to the first negative entry will be considered. sizes is the {H, W} tuple of the size of the plane to scatter onto.
For a single element, it's sufficient to unsqueeze(0) for it to look batched. Positions don't have to be unique: this function performs sum-pooling by default. The output is of size (b, c, y, x), similar to the input of a convnet.
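A minimal sketch, scattering two feature vectors onto a 4x5 plane (batch size 1; values illustrative):

    auto positions = torch::tensor({1, 2, 3, 4}, torch::kLong).view({1, 2, 2}); // (y,x) = (1,2) and (3,4)
    auto data = torch::ones({1, 2, 3}); // two entries with 3 channels each
    auto dest = common::scatterSum2d(positions, data, {4, 5}); // shape (1, 3, 4, 5)
    // dest[0][c][1][2] == 1 and dest[0][c][3][4] == 1 for every channel c.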
template<typename Iter, typename RandomGenerator>
Iter common::select_randomly(Iter start, Iter end, RandomGenerator& g)
torch::Tensor common::selectIndex(torch::Tensor x, torch::Tensor y, int axis, bool keepDim)
void common::setCurrentThreadName(std::string const& name)
std::vector<uint8_t> common::sha256sum(std::string_view data)
std::vector<uint8_t> common::sha256sum(std::vector<uint8_t> const& data)
std::vector<uint8_t> common::sha256sum(void const* data, size_t len)
torch::Tensor common::squash(torch::Tensor x, int i, int j)
Squash contiguous dimensions of a tensor into a single dimension.
The dimensions [i..j] (both included) will be squashed into a single one. So if x is of size s_1 x ... x s_d, the returned tensor will be a view of x of size s_1 x ... x s_(i-1) x (s_i * s_(i+1) * ... * s_j) x s_(j+1) x ... x s_d.
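A minimal sketch (see also unsquash below, which inverts it):

    auto x = torch::randn({2, 3, 4, 5});
    auto s = common::squash(x, 1, 2);        // dims 1..2 merged: shape (2, 12, 5)
    auto u = common::unsquash(s, 1, {3, 4}); // back to shape (2, 3, 4, 5)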
bool common::startsWith(std::string const& str, std::string const& prefix)
std::vector<std::string> common::stringSplit(char const* str, size_t len, char sep, size_t max)
Splits a string into parts delimited by the given separator character.
This will repeatedly call getline() with sep as the delimiter. If max is >= 0, at most max splits will be performed (cf. Python's split() function).
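A minimal sketch, assuming Python-like maxsplit semantics as described above:

    auto parts = common::stringSplit("a,b,c,d", ',', 2);
    // parts == {"a", "b", "c,d"}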
std::vector<std::string> common::stringSplit(char const* str, char sep, size_t max)
std::vector<std::string> common::stringSplit(std::string const& str, char sep, size_t max)
template<typename T>
std::string common::stringToLower(T&& str)
torch::Tensor common::takeNd(torch::Tensor x, torch::Tensor index)
Inverse operation of putNd_.
x has shape X1 x ... x Xd.
index has shape N x d.
y (the return value) has shape N.
For 2D tensors, this is equivalent to: y[i] = x[index[i][0], index[i][1]];
torch::Tensor common::tensorFromNpyArray(cnpy::NpyArray array, torch::TensorOptions op)
std::string common::tensorInfo(torch::Tensor x)
Returns a string containing the tensor type and sizes.
std::string common::tensorStats(torch::Tensor x)
Returns a string containing the tensor info, the max/min/mean and sum.
double common::timestamp(std::chrono::system_clock::time_point tp = std::chrono::system_clock::now())
std::string common::toHex(std::vector<uint8_t> const& digest)
std::pair<int64_t, int64_t> common::torchMemoryUsage(int device = 0)
Returns the current memory usage: the first element is the amount allocated (i.e. currently used by tensors that are alive), and the second element is the amount cached by the caching allocator.
WARNING: This function calls cudaDeviceSynchronize, so it's extremely expensive and should not run in any training loop unless hidden behind an if statement.
std::vector<ag::Variant> common::unBatchVariant(ag::Variant const& batch, int stride = 1, bool maskOut = false, double maskValue = -1)
This function is the opposite of makeBatchVariant.
It assumes that the tensors to be found in the batch have a first dimension of size b, interpreted as the batch dimension. It will take slices of size stride along this dimension.
torch::Tensor common::unsquash(torch::Tensor x, int i, at::IntList sizes)
Unsquash a dimension of a tensor into several dimensions.
Replaces the i-th dimension of x by sizes (this increases the number of dimensions of x). The product of the elements of sizes should be x.size(i) (sizes can also contain a -1). If x is of size s_1 x ... x s_d, the returned tensor will be a view of x of size s_1 x ... x s_(i-1) x sizes x s_(i+1) x ... x s_d.
torch::Tensor common::unsqueezes(int before, torch::Tensor x, int after)
Performs multiple unsqueezes on the first and last dimensions.
torch::Tensor common::upsample(torch::Tensor input, UpsampleMode mode, at::IntList size)
torch::Tensor common::upsample(torch::Tensor input, UpsampleMode mode, int scaleFactor)
std::string common::variantInfo(ag::Variant x)
Returns a string describing the content of a variant.
torch::Tensor common::weightedMaskedSoftmax(torch::Tensor input, torch::Tensor mask, int dim, float clampEpsilon = 0)
Compute a weighted masked softmax of a tensor in a numerically stable way by removing the max value before exponentiating.
The passed in mask must be a variable of floats of the same shape as the input. It should include weighting and masking as desired (it need not be binary).
Returns the output after weighting, masking, and softmaxing.
void common::zerosToOnes_(torch::Tensor x)
Replace (in-place) all zeroes of x by ones.
auto const common::DataReader_NoopF = [] {}