A fast, distributed, high performance gradient boosting framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
WebJar for postcss-minify-gradients
This module defines the interface for AlignmentAlgorithms as well as some helper classes. An AlignmentAlgorithm computes an Alignment of two given input sequences, given a Comparator that works on these sequences. More details on the AlignmentAlgorithm can be found in the respective interface. More information on Comparators can be found in the comparators module. The resulting 'Alignment' may be just a real-valued dissimilarity between the input sequences or may incorporate additional information, such as a full Alignment, a PathList, a PathMap or a CooptimalModel. If those results support the calculation of a Gradient, they implement the DerivableAlignmentDistance interface. In more detail, the Alignment class represents the result of a backtracing scheme, listing all Operations that have been applied in one co-optimal Alignment. A classic AlignmentAlgorithm does not result in a differentiable dissimilarity, because the minimum function is not differentiable. Therefore, this package also contains utility functions for a soft approximation of the minimum function, namely Softmin. For faster (parallel) computation of many different alignments or gradients we also provide the ParallelProcessingEngine, the SquareParallelProcessingEngine and the ParallelGradientEngine.
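As a rough illustration of the soft minimum idea mentioned above (not the toolbox's actual Softmin API; all names here are hypothetical), one common differentiable approximation replaces the hard minimum with an exponentially weighted average that converges to the true minimum as the crispness parameter beta grows:

```java
// Illustrative sketch of a soft minimum: a differentiable approximation of min.
// Class and method names are hypothetical and do not reflect the toolbox's API.
public final class SoftMinSketch {

    /** Weighted-average soft minimum: sum_i x_i*exp(-beta*x_i) / sum_j exp(-beta*x_j). */
    public static double softmin(double beta, double... xs) {
        // subtract the hard minimum first for numerical stability
        double min = Double.POSITIVE_INFINITY;
        for (double x : xs) min = Math.min(min, x);
        double num = 0.0, den = 0.0;
        for (double x : xs) {
            double w = Math.exp(-beta * (x - min));
            num += x * w;
            den += w;
        }
        return num / den;
    }

    public static void main(String[] args) {
        // approaches min(2.0, 3.0, 5.0) = 2.0 as beta increases
        System.out.println(softmin(1.0, 2.0, 3.0, 5.0));
        System.out.println(softmin(10.0, 2.0, 3.0, 5.0));
    }
}
```

Because this approximation is smooth, a dynamic-programming recurrence built on it yields a dissimilarity whose gradient with respect to the Comparator's parameters is well defined.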
Onboarding library for Android with gradient, image or solid color backgrounds.
orx-gradient-descent
orx-gradient-descent
orx-gradient-descent
This module defines the interfaces for Comparators in the TCS Alignment Toolbox. A Comparator has the purpose of defining the dissimilarity between elements in the input sequences of an Alignment. More specific information on Comparators can be found in the 'Comparator' interface. You can find a lot of helpful standard implementations of Comparators in the comparators-lib module. In the TCS Alignment Toolbox we require the output values of Comparators to lie in the range [0,1]. Many natural dissimilarities on value sets do not meet this criterion, such that additional normalization has to be applied. To that end, this package also contains a Normalizer interface for functions that map real values from the range [0, infinity) to the range [0,1]. This package also provides a few convenience implementations of the Comparator interface to make the implementation of custom Comparators simpler, namely: SkipExtendedComparator, ParameterLessSkipExtendedComparator, ComparisonBasedSkipExtendedComparator, and ParameterLessComparisonBasedSkipExtendedComparator. Finally, the TCS Alignment Toolbox also provides the means to learn parameters of Comparators. To enable that, Comparators must implement the DerivableComparator interface to properly define the parameters that can be learned and the gradient of the dissimilarity with respect to these parameters. Gradients are stored using the Gradient interface as well as some convenience implementations of said interface, namely EmptyGradient, SingletonGradient, ArrayGradient and ListGradient.
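To make the normalization requirement concrete, here is a minimal sketch of a function mapping [0, infinity) to [0,1); the interface and constant names are hypothetical and not the toolbox's actual Normalizer API:

```java
// Illustrative sketch of a normalizer mapping [0, infinity) to [0, 1).
// Interface and names are hypothetical, not the toolbox's API.
public final class NormalizerSketch {

    interface Normalizer {
        double normalize(double dissimilarity); // input >= 0, output in [0, 1)
    }

    /** x / (x + 1): monotone, maps 0 to 0 and approaches 1 as x grows. */
    static final Normalizer HYPERBOLIC = x -> x / (x + 1.0);

    public static void main(String[] args) {
        System.out.println(HYPERBOLIC.normalize(0.0)); // 0.0
        System.out.println(HYPERBOLIC.normalize(3.0)); // 0.75
        System.out.println(HYPERBOLIC.normalize(1e6)); // close to 1.0
    }
}
```

Any monotone squashing of this kind preserves the ordering of dissimilarities while meeting the [0,1] output requirement.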
The JWebSwing implementation for Linear Gradients IE10 and Less
The JWebSwing implementation for Linear Gradients IE10 and Less
WebJar for ndarray-gradient
RBFNetwork implements a normalized Gaussian radial basis function network. It uses the k-means clustering algorithm to provide the basis functions and learns either a logistic regression (discrete class problems) or linear regression (numeric class problems) on top of that. Symmetric multivariate Gaussians are fit to the data from each cluster. If the class is nominal, it uses the given number of clusters per class. RBFRegressor implements radial basis function networks for regression, trained in a fully supervised manner using WEKA's Optimization class by minimizing squared error with the BFGS method. It is possible to use conjugate gradient descent rather than BFGS updates, which is faster for cases with many parameters, and to use normalized basis functions instead of unnormalized ones.
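A minimal sketch of the normalized Gaussian basis computation described above (not WEKA's implementation; a single shared bandwidth is assumed for brevity): each basis value is a Gaussian of the distance to a cluster centre, and the values are normalized to sum to one before the regression layer combines them.

```java
// Sketch: normalized Gaussian radial basis functions over k-means centres.
public final class RbfSketch {

    static double[] normalizedGaussianBasis(double[] x, double[][] centres, double sigma) {
        double[] phi = new double[centres.length];
        double sum = 0.0;
        for (int k = 0; k < centres.length; k++) {
            double sq = 0.0;
            for (int d = 0; d < x.length; d++) {
                double diff = x[d] - centres[k][d];
                sq += diff * diff;
            }
            phi[k] = Math.exp(-sq / (2.0 * sigma * sigma));
            sum += phi[k];
        }
        for (int k = 0; k < phi.length; k++) phi[k] /= sum; // normalization step
        return phi;
    }

    public static void main(String[] args) {
        double[][] centres = {{0.0, 0.0}, {1.0, 1.0}}; // e.g. obtained via k-means
        double[] phi = normalizedGaussianBasis(new double[]{0.2, 0.1}, centres, 0.5);
        System.out.println(phi[0] + " " + phi[1]); // sums to 1, weighted toward the nearer centre
    }
}
```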
This module is a custom implementation of the Large Margin Nearest Neighbor classification scheme of Weinberger, Saul, et al. (2009). It contains an implementation of the k-nearest neighbor and LMNN classifier as well as (most importantly) gradient calculation schemes on the LMNN cost function given a sequential data set and a user-chosen alignment algorithm. This enables users to learn parameters of the alignment distance in question using a gradient descent on the LMNN cost function. More information on this approach can be found in the Master's Thesis "Adaptive Affine Sequence Alignment Using Algebraic Dynamic Programming".
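For orientation, a rough sketch of the LMNN cost for a precomputed dissimilarity matrix follows: a "pull" term over same-class target neighbours plus a hinge "push" term over differently labelled impostors (Weinberger & Saul, 2009). Variable names are illustrative; this is not the module's actual implementation.

```java
// Sketch of the LMNN cost: pull term over target neighbours + mu * hinge push term.
public final class LmnnCostSketch {

    static double lmnnCost(double[][] dist, int[] label, int[][] targetNeighbours, double mu) {
        double pull = 0.0, push = 0.0;
        for (int i = 0; i < dist.length; i++) {
            for (int j : targetNeighbours[i]) {
                pull += dist[i][j];
                for (int l = 0; l < dist.length; l++) {
                    if (label[l] == label[i]) continue; // only impostors of a different class
                    double margin = 1.0 + dist[i][j] - dist[i][l];
                    if (margin > 0.0) push += margin;   // hinge loss
                }
            }
        }
        return pull + mu * push;
    }

    public static void main(String[] args) {
        double[][] dist = {{0.0, 0.2, 0.9}, {0.2, 0.0, 0.8}, {0.9, 0.8, 0.0}};
        int[] label = {0, 0, 1};
        int[][] neighbours = {{1}, {0}, {}}; // target neighbours share the class
        System.out.println(lmnnCost(dist, label, neighbours, 0.5));
    }
}
```

When the dissimilarities are produced by a parameterised alignment algorithm, the gradient of this cost with respect to the alignment parameters is what drives the learning.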
Encapsulates common Android controls, such as sletextbutton, sleimagebutton, sleconstraintlayout, sleframelayout, slelinearlayout, slerelativelayout, etc. These controls have shape and selector functionality built in, saving the cumbersome step of writing shape or selector files. In addition, n-color gradients are supported, which makes up for the limitation that the native shape file only supports three colors (startColor / centerColor / endColor).
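For comparison, here is a hedged sketch of how an n-colour gradient can be built programmatically with the stock Android GradientDrawable, which (unlike a three-colour shape XML) accepts an arbitrary colour array; this shows the platform API only, not this library's controls:

```java
import android.graphics.drawable.GradientDrawable;
import android.view.View;

// Sketch: applying a four-stop gradient background via the standard Android API.
public final class GradientBackgroundHelper {

    public static void applyFourColourGradient(View view) {
        GradientDrawable drawable = new GradientDrawable(
                GradientDrawable.Orientation.LEFT_RIGHT,
                new int[]{0xFFFF0000, 0xFFFFA500, 0xFF00FF00, 0xFF0000FF}); // four colour stops
        drawable.setCornerRadius(16f);
        view.setBackground(drawable);
    }
}
```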
The JWebSwing implementation for Linear Gradients IE10 and Less
react-component is an SDK for development with react-native code.
A library of colors and gradient builders for Android built using Jetpack Compose
Used to set the font gradient color, gradient direction, and animation effect of a TextView.
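The underlying technique is usually a shader on the text paint; a minimal sketch with the stock Android API (not this library's own classes) might look like this:

```java
import android.graphics.LinearGradient;
import android.graphics.Shader;
import android.widget.TextView;

// Sketch: horizontal gradient text colour via a LinearGradient shader on the text paint.
public final class GradientTextHelper {

    public static void applyHorizontalGradient(TextView textView, int startColor, int endColor) {
        float textWidth = textView.getPaint().measureText(textView.getText().toString());
        Shader shader = new LinearGradient(
                0f, 0f, textWidth, 0f,
                startColor, endColor, Shader.TileMode.CLAMP);
        textView.getPaint().setShader(shader);
        textView.invalidate(); // redraw with the new shader
    }
}
```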
Implements the stochastic variant of the Pegasos (Primal Estimated sub-GrAdient SOlver for SVM) method of Shalev-Shwartz et al. (2007). This implementation globally replaces all missing values and transforms nominal attributes into binary ones. It also normalizes all attributes, so the coefficients in the output are based on the normalized data. Can either minimize the hinge loss (SVM) or log loss (logistic regression). For more information, see S. Shalev-Shwartz, Y. Singer, N. Srebro: Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In: 24th International Conference on Machine Learning, 807-814, 2007.
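An illustrative sketch of a single Pegasos-style stochastic sub-gradient step for the hinge loss follows (not WEKA's implementation; the preprocessing mentioned above and the optional projection step are omitted):

```java
import java.util.Random;

// Sketch: Pegasos stochastic sub-gradient step for the hinge loss.
public final class PegasosSketch {

    static void pegasosStep(double[] w, double[] x, double y, double lambda, int t) {
        double eta = 1.0 / (lambda * t);       // step size 1 / (lambda * t)
        double margin = y * dot(w, x);
        for (int d = 0; d < w.length; d++) {
            w[d] *= (1.0 - eta * lambda);      // shrink by the regularizer
            if (margin < 1.0) {
                w[d] += eta * y * x[d];        // sub-gradient of the hinge loss
            }
        }
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    public static void main(String[] args) {
        double[] w = new double[2];
        double[][] xs = {{1.0, 1.0}, {-1.0, -1.0}};
        double[] ys = {1.0, -1.0};
        Random rng = new Random(42);
        for (int t = 1; t <= 100; t++) {
            int i = rng.nextInt(xs.length);
            pegasosStep(w, xs[i], ys[i], 0.1, t);
        }
        System.out.println(dot(w, xs[0])); // positive score for the positive example
    }
}
```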
This package currently contains classes for training multilayer perceptrons with one hidden layer, where the number of hidden units is user specified. MLPClassifier can be used for classification problems and MLPRegressor is the corresponding class for numeric prediction tasks. The former has as many output units as there are classes, the latter only one output unit. Both minimise a penalised squared error with a quadratic penalty on the (non-bias) weights, i.e., they implement "weight decay", where this penalised error is averaged over all training instances. The size of the penalty can be determined by the user by modifying the "ridge" parameter to control overfitting. The sum of squared weights is multiplied by this parameter before being added to the squared error. Both classes use BFGS optimisation by default to find parameters that correspond to a local minimum of the error function, but optionally conjugate gradient descent is available, which can be faster for problems with many parameters. Logistic functions are used as the activation functions for all units apart from the output unit in MLPRegressor, which employs the identity function. Input attributes are standardised to zero mean and unit variance. MLPRegressor also rescales the target attribute (i.e., "class") using standardisation. All network parameters are initialised with small normally distributed random values.
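As a sketch of the penalised error described above (mean squared error over the training instances plus the ridge parameter times the sum of squared non-bias weights); the names are illustrative and this is not WEKA's internal code:

```java
// Sketch: penalised squared error with "weight decay".
public final class WeightDecaySketch {

    static double penalisedError(double[] predictions, double[] targets,
                                 double[] nonBiasWeights, double ridge) {
        double squaredError = 0.0;
        for (int i = 0; i < predictions.length; i++) {
            double diff = predictions[i] - targets[i];
            squaredError += diff * diff;
        }
        double penalty = 0.0;
        for (double w : nonBiasWeights) penalty += w * w; // sum of squared weights
        return squaredError / predictions.length + ridge * penalty;
    }

    public static void main(String[] args) {
        double[] pred = {0.9, 0.1, 0.4};
        double[] target = {1.0, 0.0, 0.5};
        double[] weights = {0.3, -0.2, 0.7};
        System.out.println(penalisedError(pred, target, weights, 0.01));
    }
}
```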
The JWebSwing implementation for Linear Gradients IE10 and Less
WebJar for tinygradient
A convenient text gradient TextView
This package contains a classifier that can be used to train a two-class kernel logistic regression model with the kernel functions that are available in WEKA. It optimises the negative log-likelihood with a quadratic penalty. Both BFGS and conjugate gradient descent are available as optimisation methods, but the former is normally faster. It is possible to use multiple threads, but the speed-up is generally very marginal when used with BFGS optimisation. With conjugate gradient descent optimisation, greater speed-ups can be achieved when using multiple threads. With the default kernel, the dot product kernel, this method produces results that are close to identical to those obtained using standard logistic regression in WEKA, provided a sufficiently large value for the parameter determining the size of the quadratic penalty is used in both cases.
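A rough sketch of that kind of objective for a kernel expansion f(x_i) = sum_j alpha_j * K(x_i, x_j) follows: the two-class negative log-likelihood plus a quadratic penalty on the coefficients. The exact parameterisation and penalty used by the WEKA package may differ.

```java
// Sketch: penalised negative log-likelihood for kernel logistic regression.
public final class KernelLogisticSketch {

    /** labels in {0, 1}; kernel is the precomputed Gram matrix K[i][j]. */
    static double penalisedNegLogLik(double[][] kernel, double[] alpha, int[] labels, double lambda) {
        double nll = 0.0;
        for (int i = 0; i < labels.length; i++) {
            double f = 0.0;
            for (int j = 0; j < alpha.length; j++) f += alpha[j] * kernel[i][j];
            double p = 1.0 / (1.0 + Math.exp(-f));               // logistic link
            nll -= labels[i] == 1 ? Math.log(p) : Math.log(1.0 - p);
        }
        double penalty = 0.0;
        for (double a : alpha) penalty += a * a;                 // quadratic penalty
        return nll + lambda * penalty;
    }

    public static void main(String[] args) {
        double[][] gram = {{1.0, 0.5}, {0.5, 1.0}};              // toy Gram matrix
        System.out.println(penalisedNegLogLik(gram, new double[]{0.3, -0.3}, new int[]{1, 0}, 0.01));
    }
}
```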
Gradient animation library that provides animated drawable resources and views for enhancing user interfaces.