
Objects
v.noisegate External v.noisegate sets all pixels with brightnesses below the noise threshold to zero.
v.noisegate sets all pixels with brightnesses below the noise threshold to zero. The operation can be set to work on pseudo-signed streams (meaning that 128 is assumed to represent zero); in that mode, values that are less than the noise threshold away from zero (either above or below) are set to 128.
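As a rough illustration of the gating rule described above, here is a sketch in Python with NumPy (not softVNS code; the function name and array representation are assumptions):

```python
import numpy as np

def noisegate(frame, threshold, pseudo_signed=False):
    """Illustrative sketch of a v.noisegate-style operation.

    frame: uint8 array of pixel brightnesses.
    """
    out = frame.copy()
    if pseudo_signed:
        # 128 represents zero; values within `threshold` of 128 are set to 128
        distance = np.abs(out.astype(np.int16) - 128)
        out[distance < threshold] = 128
    else:
        # plain mode: brightnesses below the threshold become 0
        out[out < threshold] = 0
    return out
```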
v.packyuv External v.packyuv takes 3 grayscale streams and combines them into a YUV image.
v.packyuv takes 3 grayscale streams and combines them into a YUV image. N.B. the u and v streams are assumed to be signed: 0 - 127 are positive values and 128 - 255 are actually -128 to -1. (Note that this is different from pseudo-signed images like those from v.motion in signed mode, where 0 - 127 are really negative and 128 - 255 are positive.) To convert an unsigned or pseudo-signed int8 stream to a signed int8 stream or vice versa, you can use a v.xor 128 object.
v.packy_uv External v.packy_uv takes 2 grayscale streams and combines them into a YUV image.
v.packy_uv takes 2 grayscale streams and combines them into a YUV image. The first stream holds the y values. The second stream contains the u and v values (alternating u, then v, then u) each at 1/2 the resolution of the y components. The yyyy stream and the uvuv stream are packed into a yuyvyuyv stream (normal YUV). N.B. the u and v streams are assumed to be signed: 0 - 127 are positive values and 128 - 255 are actually -128 to -1. (Note that this is different from pseudo-signed images like those from v.motion in signed mode, where 0 - 127 are really negative and 128 - 255 are positive.) To convert an unsigned or pseudo-signed int8 stream to a signed int8 stream or vice versa, you can use a v.xor 128 object.
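The v.xor 128 trick works because XORing an 8-bit value with 128 flips its top bit, which swaps the 0-127 and 128-255 halves of the range. A minimal sketch (plain Python; the function name is mine):

```python
def xor128(values):
    # Flipping the top bit swaps the two halves of the 0-255 range,
    # converting between the signed and pseudo-signed int8 layouts
    # described above.
    return [v ^ 128 for v in values]
```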
v.peek External v.peek reports the value of each component in the specified pixel or the values that make up a column or row of the stream.
v.presence External v.presence is used predominantly to detect presence.
v.presence is used predominantly to detect presence. It maintains a slowly adapting internal image that moves at a specified rate toward the pixel values in the input stream. The long exposure provides a reference image against which to compare the current frame. The difference usually represents a new presence in front of the camera. The rate at which the reference image adapts to the current incoming image is defined by a value between 0 and 1. At 0, the reference image is exactly the same as the live image. At 1, the reference image is frozen. Values between 0.99 and 0.9975 are most useful.
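The adaptation rule can be sketched as a per-pixel weighted average (hypothetical Python; the actual softVNS arithmetic may differ):

```python
def update_reference(reference, frame, rate=0.995):
    """One adaptation step of a v.presence-style reference image (sketch).

    rate=0: the reference tracks the live image exactly.
    rate=1: the reference is frozen.
    Values around 0.99-0.9975 adapt slowly, as described above.
    """
    return [rate * r + (1.0 - rate) * f for r, f in zip(reference, frame)]
```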
v.sameness External single stream: output the similarity between a stream and comparison values. dual stream: output the similarity between pixels in two streams.
single stream: output the similarity between a stream and comparison values. dual stream: output the similarity between pixels in two streams. In single stream mode, v.sameness outputs the degree of similarity between each pixel's components and the comparison values. It outputs a maximum value of 255 when the pixel's components are exactly the same as the comparison value. As the pixel values get farther from the comparison value, the output levels decrease. Higher sensitivity levels result in less tolerance to difference. Each component of the stream is processed and output separately. U and V component outputs will range between 0 and 127. Dual stream mode operates in the same way, but the comparison is between corresponding pixels in streams 1 and 2; the comparison values are ignored. In single stream mode, the incoming stream is forced to 8-bit components before processing unless the incoming stream is a float stream.
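The single-stream comparison might be sketched per component like this (an assumed linear falloff; the actual v.sameness response curve is not documented here):

```python
def sameness(value, comparison, sensitivity=1.0):
    # 255 when the component equals the comparison value; output falls
    # off with distance, and faster at higher sensitivity (assumed
    # linear falloff for illustration only).
    diff = abs(value - comparison) * sensitivity
    return max(0, 255 - int(diff))
```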
v.samplehold External single stream: sample (pass) or hold (freeze) the incoming stream. dual stream: sample (pass) or hold (freeze) individual pixels based on the values of the pixels in a second stream.
single stream: sample (pass) or hold (freeze) the incoming stream. dual stream: sample (pass) or hold (freeze) individual pixels based on the values of the pixels in a second stream. v.samplehold passes the incoming pixels when the control input is not 0 and holds the previous pixel values when it is 0. The control can come from the second inlet or, on a pixel-by-pixel basis, from the pixels of a second stream. In single stream mode, if the v.samplehold is currently "holding", then bangs received in the first inlet grab and hold a new frame.
v.saturation External single stream: set the saturation for a yuv stream dual stream: use the brightness of stream 2 to set the saturation for corresponding pixels in stream 1
single stream: set the saturation for a yuv stream. dual stream: use the brightness of stream 2 to set the saturation for corresponding pixels in stream 1. In single stream mode, v.saturation adjusts the saturation of each pixel by a value. Streams with int16, int32 and float components are processed in their existing component sizes. In dual stream mode, v.saturation adjusts the saturation of each pixel in stream 1 by the brightness of the corresponding pixel in stream 2. The streams are both forced to int8 before processing. You can set additional gain, and define how the second stream's values are interpreted, using gain and modulation_center messages.
v.silhouette External v.silhouette looks for the top edges in an image.
v.silhouette looks for the top edges in an image. Top edges are edges that have some black space above them and a brightness greater than the threshold. The optimal input stream for v.silhouette is a mix of v.edges and v.presence. Output pixels show the basis for the silhouette decision as values from 0 to 127, with detected silhouettes given values greater than 128. This object is designed to be a companion to v.heads.
v.status External v.status reports the current processor requirements of various parts of the softVNS system and allows performance tuning.
v.status reports the current processor requirements of various parts of the softVNS system and allows performance tuning. softVNS 2.1 calculates the total processing power required to process all streams at the currently set frame rate. If this amount is larger than the max_percent value, then softVNS ignores a certain number of frames per processed frame so that the actual processing percentage stays below max_percent. This effectively reduces the frame rate of the system. Note that if you are running a very complex MSP patch along with softVNS, and your softVNS streams are not running in overdrive or on QuickTime interrupts (i.e. if you are using the v.movie object or v.dig in seq_grabber mode), the processing time reported may be higher than it actually is, since other processes using interrupts will add their processing time to softVNS's.
v.sum External v.sum finds the sum of all the brightnesses in the image, and reports the sum as a single int.
v.sum finds the sum of all the brightnesses in the image, and reports the sum as a single int. All streams are translated to int8 before processing. Brightness values less than the noise threshold are ignored. Usually v.sum is used to sum the results of an object like v.motion, v.presence or v.edges when these objects are not in signed mode. Since in that case most pixels will be zero, the sum is a useful measure of total motion, total presence or overall edginess.
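A sketch of the thresholded sum (illustrative Python, not softVNS code):

```python
def brightness_sum(pixels, noise_threshold=0):
    # Sum all brightnesses, ignoring values below the noise threshold.
    return sum(v for v in pixels if v >= noise_threshold)
```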
v.wrap External This object shifts incoming values into a limited range in a few different ways.
This object shifts incoming values into a limited range in a few different ways. It takes one parameter (the limit point) and has two options: reflect and signed. The limit point parameter defines the limits of the possible output values. If signed is 1, the limits are ± the limit point. If signed is 0, the limits are 0 and the limit point. If reflect is 1, then when a limit point is surpassed, the output values reverse direction. If reflect is 0, then when a limit point is surpassed, the output value wraps to the opposite limit point. This behaviour is continuous across the zero point. That is, unlike a modulus operator or a remainder calculation, the behaviour is not inverted at the transition from negative to positive.
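One possible reading of the wrap/reflect behaviour as code (a Python sketch under the description above; the function name is mine):

```python
def vwrap(value, limit, reflect=False, signed=False):
    """Sketch of a v.wrap-style range limiter.

    signed=True  -> limits are -limit and +limit
    signed=False -> limits are 0 and limit
    """
    lo, hi = (-limit, limit) if signed else (0, limit)
    span = hi - lo
    if reflect:
        # fold back and forth between the limits (triangle-wave style)
        x = (value - lo) % (2 * span)
        return lo + (x if x <= span else 2 * span - x)
    # wrap to the opposite limit when a limit is crossed; Python's %
    # keeps this continuous across zero, unlike a C-style remainder
    return lo + (value - lo) % span
```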
v001.clamp External Clamp color values to create streaks.
v001.co3.alphamix External A/B/Alpha Mask mixer.
Mix Channels A and B according to the alpha values of C.
v001.co3.lumamix External A/B/Luma Mask mixer.
Mix Channels A and B according to the luma values of C.
page : 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35

Libraries
BulkStore
Tom Mays. Bulk storage memory device for all values (any message).
FuzzyLib
Alain Bonardi
Isis Truck
When manipulating human knowledge such as perception, feelings, appreciation, or the veracity of facts, classical logic, which recognizes only two truth degrees (true or false), is not always the most suitable.

To address this problem, non-classical logics consider more than two truth degrees. Fuzzy logic is one of these logics.

In this logic, facts are represented through membership functions: when the membership value is equal to 1, the fact is exactly true; when it is equal to 0, the fact is exactly false; in between, there is uncertainty about the veracity of the fact.

These membership functions are called "fuzzy subsets". They can have different shapes: Gaussian, trapezoidal, triangular, etc.
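For instance, a triangular fuzzy subset can be written as a simple membership function (an illustrative Python sketch, not Fuzzy Lib code):

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], 1 at the peak b,
    linearly interpolated in between."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```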

Thus the aim of fuzzy logic is to propose a theoretical framework for the manipulation - representation and reasoning - of such facts.

The Fuzzy Lib library implements the tools necessary for this manipulation: representation of fuzzy subsets (including fuzzification, defuzzification and partitioning) and reasoning processes (generalized modus ponens, fuzzy implications, t-norms, t-conorms, etc.).

Version 1 of the Fuzzy Lib makes it possible to implement fuzzification, uncertain reasoning and defuzzification for any number of data within the Max/MSP environment.
Litter Power Pro Package
Peter Castine. The Litter Power package consists of over 60 external objects, including a number of new MSP noise sources, externals that produce values from a wide variety of random number distributions, and externals for mutation and cross-synthesis.
p.jit.gl.tools
Pelado. The p.jit.gl tools are designed to make learning and experimenting with the many attributes of Jitter's GL objects easier by making them a whole lot more transparent and accessible. Patches expose a Jitter GL object's attributes through interfaces that let you immediately edit and change an attribute's value. Many of the parameters are attached to blines, which provide smooth changes while rendering, and all settings can be saved and recalled as presets using the pattrs embedded in the patches.
Panaiotis Objects
Panaiotis. The Mac version is UB.

These Max objects have been enhanced since the documentation to the left was written. Help files for the objects provide information on enhancements.

The matrix object has been substantially upgraded. It now combines features of unpack, spray, funnel, append, and prepend into one object. This makes it a great object to place between controllers and jit objects, because it acts like a multi-prepend. There are new configuration commands and enhancements to the old ones (even, odd, mod, and range, among others). Most commands can be applied to inlets or outlets. There is also a mute function that adds another layer of control. Matrixctrl support has been enhanced. See the help file for full details and examples.

Most other objects now fully support floats. RCer and autocount will count in float values, not just integers.

Notegen16 is a 16-channel version of its predecessor, notegen. It is more generalized and much more efficient.
SFA Max/MSP Library
Stefano Fasciani. The SFA-MaxLib is a collection of Max/MSP objects developed in the context of the VCI4DMI. It includes functions and utilities in the form of FTM externals, FTM abstractions and Max abstractions. FTM is a shared library for Max/MSP developed by IRCAM, which provides a small and simple real-time object system and a set of optimized services to be used within Max/MSP externals.

List of FTM externals: sfa.eig - eigenvalues; sfa.inputcombinations - combination generator; sfa.levinson - Levinson-Durbin recursion; sfa.lpc2cep - LPC to cepstra conversion; sfa.rastafilt - RASTA filter; sfa.rmd - relative mean difference; sfa.roots - polynomial roots.

List of abstractions: sfa.bark.maxpat - energy of the Bark bands from time domain frame; sfa.bark2hz_vect.maxpat - Hertz to Bark conversion; sfa.barkspect.maxpat - energy of the Bark bands from spectrum; sfa.ceil.maxpat - ceil function; sfa.featfluxgate.maxpat - gated distance on stream of feature vectors; sfa.fft2barkmx.maxpat - utility sub-abstraction of sfa.bark; sfa.fft2barkmxN.maxpat - utility sub-abstraction of sfa.barkspect; sfa.hynek_eq_coeff.maxpat - hynek equalization coefficients; sfa.hz2bark.maxpat - Hertz to Bark conversion; sfa.hz2bark_vect.maxpat - Hertz to Bark conversion for vectors; sfa.hz2mel.maxpat - Hertz to Mel conversion; sfa.idft_real_coeff.maxpat - utility sub-abstraction of sfa.rasta-plp; sfa.maxminmem.maxpat - minimum and maximum of a stream of data; sfa.mfcc.maxpat - MFCC coefficients; sfa.modalphafilter.maxpat - 1st order IIR lowpass on a stream of vectors; sfa.nonlinfeqscale.maxpat - linear spectrum to Bark or Mel scale conversion; sfa.rasta-plp.maxpat - PLP and RASTA-PLP coefficients; sfa.spectmoments.maxpat - 4 spectral moments (centroid, deviation, skewness, kurtosis); sfa.3spectmoments+flatness.maxpat - 3 spectral moments (centroid, deviation, skewness) and the spectral flatness; sfa.spectralflux.maxpat - spectral flux on stream of spectrum vectors; sfa.spectralfluxgate.maxpat - gated spectral flux on stream of spectrum vectors; sfa.std.maxpat - standard deviation; sfa.win_to_fft_size.maxpat - smaller FFT size given frame size; sfa.GCemulator.maxpat - 3D gestural controller emulator.
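As an illustration of the first of those spectral moments, the spectral centroid is the magnitude-weighted mean frequency of a spectrum (a generic Python sketch, not SFA-MaxLib code):

```python
def spectral_centroid(magnitudes, freqs):
    # First spectral moment: magnitude-weighted mean of the bin
    # frequencies. Returns 0.0 for an all-zero spectrum.
    total = sum(magnitudes)
    if total == 0:
        return 0.0
    return sum(f * m for f, m in zip(freqs, magnitudes)) / total
```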
suivi
Ircam. Two externals performing score following on soloist performances using Hidden Markov Models (HMM).
Suivi is based on FTM and requires the shared library FTMlib for Max/MSP. Both externals use an FTM track object - a sequence of time-tagged FTM values - to store the score of the soloist performance to be followed. Notes, trills and other elements of the score are represented by FTM score objects (FTM scoob class). For the moment, scores can be imported from standard MIDI files only.
An editor for the FTM track class, which will also provide a graphical control interface for the score follower, is under development, as is the import of MusicXML files.
The suivi object set is distributed within the IRCAM Forum.

4855 objects and 135 libraries within the database. Last entries: December 23rd, 2023.
Site under GNU Free Documentation License