"""

Numerical python functions written for compatibility with MATLAB
commands with the same names.

MATLAB compatible functions
---------------------------

:func:`cohere`
    Coherence (normalized cross spectral density)

:func:`csd`
    Cross spectral density using Welch's average periodogram

:func:`detrend`
    Remove the mean or best fit line from an array

:func:`find`
    Return the indices where some condition is true;
    numpy.nonzero is similar but more general.

:func:`griddata`
    Interpolate irregularly distributed data to a regular grid.

:func:`prctile`
    Find the percentiles of a sequence

:func:`prepca`
    Principal Component Analysis

:func:`psd`
    Power spectral density using Welch's average periodogram

:func:`rk4`
    A 4th order Runge-Kutta integrator for 1D or ND systems

:func:`specgram`
    Spectrogram (power spectral density over segments of time)

Miscellaneous functions
-----------------------

Functions that don't exist in MATLAB, but are useful anyway:

:meth:`cohere_pairs`
    Coherence over all pairs.  This is not a MATLAB function, but we
    compute coherence a lot in my lab, and we compute it for a lot of
    pairs.  This function is optimized to do this efficiently by
    caching the direct FFTs.
:meth:`rk4`
    A 4th order Runge-Kutta ODE integrator in case you ever find
    yourself stranded without scipy (and the far superior
    scipy.integrate tools)

:meth:`contiguous_regions`
    Return the indices of the regions spanned by some logical mask

:meth:`cross_from_below`
    Return the indices where a 1D array crosses a threshold from below

:meth:`cross_from_above`
    Return the indices where a 1D array crosses a threshold from above

record array helper functions
-----------------------------

A collection of helper methods for numpy record arrays

.. _htmlonly:

    See :ref:`misc-examples-index`

:meth:`rec2txt`
    Pretty print a record array

:meth:`rec2csv`
    Store record array in CSV file

:meth:`csv2rec`
    Import record array from CSV file with type inspection

:meth:`rec_append_fields`
    Adds  field(s)/array(s) to record array

:meth:`rec_drop_fields`
    Drop fields from record array

:meth:`rec_join`
    Join two record arrays on sequence of fields

:meth:`recs_join`
    A simple join of multiple recarrays using a single column as a key

:meth:`rec_groupby`
    Summarize data by groups (similar to SQL GROUP BY)

:meth:`rec_summarize`
    Helper code to filter rec array fields into new fields

For the rec viewer functions (e.g. rec2csv), there are a bunch of
Format objects you can pass into the functions that will do things
like color negative values red, set percent formatting and scaling,
etc.
Example usage::

    r = csv2rec('somefile.csv', checkrows=0)

    formatd = dict(
        weight = FormatFloat(2),
        change = FormatPercent(2),
        cost   = FormatThousands(2),
        )


    rec2excel(r, 'test.xls', formatd=formatd)
    rec2csv(r, 'test.csv', formatd=formatd)
    scroll = rec2gtk(r, formatd=formatd)

    win = gtk.Window()
    win.set_size_request(600,800)
    win.add(scroll)
    win.show_all()
    gtk.main()


Deprecated functions
--------------------

The following are deprecated; please import directly from numpy (with
care--function signatures may differ):

:meth:`load`
    load ASCII file - use numpy.loadtxt

:meth:`save`
    save ASCII file - use numpy.savetxt

"""

from __future__ import division

import csv, warnings, copy, os, operator

import numpy as np
ma = np.ma
from matplotlib import verbose

import matplotlib.nxutils as nxutils
import matplotlib.cbook as cbook
from matplotlib import docstring


def logspace(xmin, xmax, N):
    return np.exp(np.linspace(np.log(xmin), np.log(xmax), N))


def _norm(x):
    "return sqrt(x dot x)"
    return np.sqrt(np.dot(x, x))


def window_hanning(x):
    "return x times the hanning window of len(x)"
    return np.hanning(len(x)) * x


def window_none(x):
    "No window function; simply return x"
    return x


def detrend(x, key=None):
    if key is None or key == 'constant':
        return detrend_mean(x)
    elif key == 'linear':
        return detrend_linear(x)


def demean(x, axis=0):
    "Return x minus its mean along the specified axis"
    x = np.asarray(x)
    if axis == 0 or axis is None or x.ndim <= 1:
        return x - x.mean(axis)
    ind = [slice(None)] * x.ndim
    ind[axis] = np.newaxis
    return x - x.mean(axis)[ind]


def detrend_mean(x):
    "Return x minus the mean(x)"
    return x - x.mean()


def detrend_none(x):
    "Return x: no detrending"
    return x
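As a rough illustration of what the 'constant' and 'linear' keys of
:func:`detrend` do, here is a pure-Python sketch; the ``*_py`` names
are illustrative only and are not part of matplotlib.mlab.

```python
# Pure-Python sketch of 'constant' vs 'linear' detrending.
def detrend_mean_py(y):
    # 'constant': subtract the sample mean
    m = sum(y) / float(len(y))
    return [v - m for v in y]

def detrend_linear_py(y):
    # 'linear': least-squares line against the sample index, subtracted
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / float(n)
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    b = sxy / sxx
    a = ybar - b * xbar
    return [v - (b * i + a) for i, v in enumerate(y)]
```

Detrending perfectly linear data with the 'linear' flavor leaves only
floating-point noise, which is a handy sanity check.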
def detrend_linear(y):
    "Return y minus best fit line; 'linear' detrending"
    # Emulate MATLAB's detrend.m: fit a line by least squares against
    # the sample index and subtract it.
    x = np.arange(len(y), dtype=np.float_)
    C = np.cov(x, y, bias=1)
    b = C[0, 1] / C[0, 0]
    a = y.mean() - b * x.mean()
    return y - (b * x + a)


def psd(x, NFFT=256, Fs=2, detrend=detrend_none, window=window_hanning,
        noverlap=0, pad_to=None, sides='default', scale_by_freq=None):
    """
    Power spectral density using Welch's average periodogram: the
    signal is split into NFFT-length segments overlapping by
    *noverlap* points; each segment is detrended, windowed, and
    Fourier transformed; and the squared magnitudes are averaged.
    """


def csd(x, y, NFFT=256, Fs=2, detrend=detrend_none, window=window_hanning,
        noverlap=0, pad_to=None, sides='default', scale_by_freq=None):
    """
    Cross spectral density using Welch's average periodogram.
    """


def specgram(x, NFFT=256, Fs=2, detrend=detrend_none,
             window=window_hanning, noverlap=128, pad_to=None,
             sides='default', scale_by_freq=None):
    """
    Spectrogram (power spectral density over segments of time).
    """


def cohere(x, y, NFFT=256, Fs=2, detrend=detrend_none,
           window=window_hanning, noverlap=0, pad_to=None,
           sides='default', scale_by_freq=None):
    """
    Coherence (normalized cross spectral density),
    :math:`C_{xy} = |P_{xy}|^2/(P_{xx}P_{yy})`.
    """


def donothing_callback(*args):
    pass


def cohere_pairs(X, ij, NFFT=256, Fs=2, detrend=detrend_none,
                 window=window_hanning, noverlap=0,
                 preferSpeedOverMemory=True,
                 progressCallback=donothing_callback,
                 returnPxx=False):
    """
    Compute the coherence and phase for all pairs *ij*, in *X*.

    Return value is a tuple (*Cxy*, *Phase*, *freqs*) where:

    - *Cxy*: dictionary of (*i*, *j*) tuples -> coherence vector for
      that pair.  I.e., ``Cxy[(i,j)] = cohere(X[:,i], X[:,j])``.
      Number of dictionary keys is ``len(ij)``.

    - *Phase*: dictionary of phases of the cross spectral density at
      each frequency for each pair.  Keys are (*i*, *j*).

    - *freqs*: vector of frequencies, equal in length to either the
      coherence or phase vectors for any (*i*, *j*) key.

    Eg., to make a coherence Bode plot::

          subplot(211)
          plot( freqs, Cxy[(12,19)])
          subplot(212)
          plot( freqs, Phase[(12,19)])

    For a large number of pairs, :func:`cohere_pairs` can be much more
    efficient than just calling :func:`cohere` for each pair, because
    it caches most of the intensive computations.  If :math:`N` is the
    number of pairs, this function is :math:`O(N)` for most of the
    heavy lifting, whereas calling cohere for each pair is
    :math:`O(N^2)`.  However, because of the caching, it is also more
    memory intensive, making 2 additional complex arrays with
    approximately the same number of elements as *X*.

    See :file:`test/cohere_pairs_test.py` in the src tree for an
    example script that shows that this :func:`cohere_pairs` and
    :func:`cohere` give the same results for a given pair.

    .. seealso::

        :func:`psd`
            For information about the methods used to compute
            :math:`P_{xy}`, :math:`P_{xx}` and :math:`P_{yy}`.
    """
def entropy(y, bins):
    r"""
    Return the entropy of the data in *y*.

    .. math::

      -\sum p_i \ln(p_i)

    where :math:`p_i` is the probability of observing *y* in the
    :math:`i^{th}` bin of *bins*.  *bins* can be a number of bins or a
    range of bins; see :func:`numpy.histogram`.

    Compare *S* with analytic calculation for a Gaussian::

      x = mu + sigma * randn(200000)
      Sanalytic = 0.5 * ( 1.0 + log(2*pi*sigma**2.0) )
    """
    n, bins = np.histogram(y, bins)
    n = n.astype(np.float_)

    n = np.take(n, np.nonzero(n)[0])         # get the positive entries

    p = np.divide(n, len(y))

    delta = bins[1] - bins[0]
    S = -1.0 * np.sum(p * np.log(p)) + np.log(delta)
    return S


def normpdf(x, *args):
    "Return the normal pdf evaluated at *x*; args provides *mu*, *sigma*"
    mu, sigma = args
    return 1. / (np.sqrt(2 * np.pi) * sigma) * np.exp(
        -0.5 * (1. / sigma * (x - mu)) ** 2)


def levypdf(x, gamma, alpha):
    "Return the levy pdf evaluated at *x* for params *gamma*, *alpha*"

    N = len(x)

    if N % 2 != 0:
        raise ValueError('x must be an even length array; try\n' +
                         'x = np.linspace(minx, maxx, N), where N is even')

    dx = x[1] - x[0]

    f = 1.0 / (N * dx) * np.arange(-N / 2, N / 2, np.float_)

    ind = np.concatenate([np.arange(N / 2, N, int),
                          np.arange(0, N / 2, int)])
    df = f[1] - f[0]
    cfl = np.exp(-gamma * np.absolute(2 * np.pi * f) ** alpha)

    px = np.fft.fft(np.take(cfl, ind) * df).astype(np.float_)
    return np.take(px, ind)


def find(condition):
    "Return the indices where ravel(condition) is true"
    res, = np.nonzero(np.ravel(condition))
    return res
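The binned-entropy computation reduces to :math:`-\sum p\ln p` over
the non-empty histogram bins, plus the log of the bin width.  A
standalone pure-Python sketch of that reduction (the name is
illustrative, not mlab's); a uniform histogram over *k* unit-width
bins must give exactly ``log(k)``:

```python
import math

def entropy_py(counts, delta):
    # entropy of a histogram: -sum p*ln(p) over occupied bins, plus
    # ln(bin width) to approximate the differential entropy
    n = float(sum(counts))
    p = [c / n for c in counts if c > 0]
    return -sum(pi * math.log(pi) for pi in p) + math.log(delta)
```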
def longest_contiguous_ones(x):
    """
    Return the indices of the longest stretch of contiguous ones in
    *x*, assuming *x* is a vector of zeros and ones.  If there are two
    equally long stretches, pick the first.
    """
    x = np.ravel(x)
    if len(x) == 0:
        return np.array([])

    ind = (x == 0).nonzero()[0]
    if len(ind) == 0:
        return np.arange(len(x))
    if len(ind) == len(x):
        return np.array([])

    y = np.zeros((len(x) + 2,), x.dtype)
    y[1:-1] = x
    dif = np.diff(y)
    up = (dif == 1).nonzero()[0]
    dn = (dif == -1).nonzero()[0]
    i = (dn - up == max(dn - up)).nonzero()[0][0]
    ind = np.arange(up[i], dn[i])
    return ind


def longest_ones(x):
    "alias for longest_contiguous_ones"
    return longest_contiguous_ones(x)


def prepca(P, frac=0):
    """

    WARNING: this function is deprecated -- please see class PCA instead

    Compute the principal components of *P*.  *P* is a (*numVars*,
    *numObs*) array.  *frac* is the minimum fraction of variance that a
    component must contain to be included.

    Return value is a tuple of the form (*Pcomponents*, *Trans*,
    *fracVar*) where:

      - *Pcomponents* : a (numVars, numObs) array

      - *Trans* : the weights matrix, ie, *Pcomponents* = *Trans* * *P*

      - *fracVar* : the fraction of the variance accounted for by each
        component returned

    A similar function of the same name was in the MATLAB R13 Neural
    Network Toolbox but is not found in later versions; its successor
    seems to be called "processpcs".
    """
    warnings.warn('This function is deprecated -- see class PCA instead')
    U, s, v = np.linalg.svd(P)
    varEach = s ** 2 / P.shape[1]
    totVar = varEach.sum()
    fracVar = varEach / totVar
    ind = slice((fracVar >= frac).sum())
    # select the components that are greater
    Trans = U[:, ind].transpose()
    # The transformed data
    Pcomponents = np.dot(Trans, P)
    return Pcomponents, Trans, fracVar[ind]
class PCA:
    def __init__(self, a):
        """
        compute the SVD of a and store data for PCA.  Use project to
        project the data onto a reduced set of dimensions

        Inputs:

          *a*: a numobservations x numdims array

        Attrs:

          *a* a centered unit sigma version of input a

          *numrows*, *numcols*: the dimensions of a

          *mu* : a numdims array of means of a

          *sigma* : a numdims array of standard deviation of a

          *fracs* : the proportion of variance of each of the principal
          components

          *Wt* : the weight vector for projecting a numdims point or
          array into PCA space

          *Y* : a projected into PCA space


        The factor loadings are in the Wt factor, ie the factor
        loadings for the 1st principal component are given by Wt[0]
        """
        n, m = a.shape
        if n < m:
            raise RuntimeError(
                'we assume data in a is organized with numrows>numcols')

        self.numrows, self.numcols = n, m
        self.mu = a.mean(axis=0)
        self.sigma = a.std(axis=0)

        a = self.center(a)

        self.a = a

        U, s, Vh = np.linalg.svd(a, full_matrices=False)

        Y = np.dot(Vh, a.T).T

        vars = s ** 2 / float(len(s))
        self.fracs = vars / vars.sum()

        self.Wt = Vh
        self.Y = Y

    def project(self, x, minfrac=0.):
        """
        project x onto the principal axes, dropping any axes where
        fraction of variance < minfrac
        """
        x = np.asarray(x)
        ndims = len(x.shape)

        if x.shape[-1] != self.numcols:
            raise ValueError('Expected an array with dims[-1]==%d'
                             % self.numcols)

        Y = np.dot(self.Wt, self.center(x).T).T
        mask = self.fracs >= minfrac
        if ndims == 2:
            Yreduced = Y[:, mask]
        else:
            Yreduced = Y[mask]
        return Yreduced

    def center(self, x):
        "center the data using the mean and sigma from training set a"
        return (x - self.mu) / self.sigma


def prctile(x, p=(0.0, 25.0, 50.0, 75.0, 100.0)):
    """
    Return the percentiles of *x*.  *p* can either be a sequence of
    percentile values or a scalar.  If *p* is a sequence, the ith
    element of the return sequence is the *p*(i)-th percentile of *x*.
    """


def prctile_rank(x, p):
    """
    Return the rank for each element in *x*, return the rank
    0..len(*p*).  Eg if *p* = (25, 50, 75), the return value will be a
    len(x) array with values in [0,1,2,3] where 0 indicates the value
    is less than the 25th percentile, 1 indicates the value is >= the
    25th and < 50th percentile, ... and 3 indicates the value is above
    the 75th percentile cutoff.

    *p* is either an array of percentiles in [0..100] or a scalar which
    indicates how many quantiles of data you want ranked.
    """

    if not cbook.iterable(p):
        p = np.arange(100.0 / p, 100.0, 100.0 / p)
    else:
        p = np.asarray(p)

    if p.max() <= 1 or p.min() < 0 or p.max() > 100:
        raise ValueError('percentiles should be in range 0..100, not 0..1')

    ptiles = prctile(x, p)
    return np.searchsorted(ptiles, x)


def center_matrix(M, dim=0):
    """
    Return the matrix *M* with each row having zero mean and unit std.

    If *dim* = 1 operate on columns instead of rows.  (*dim* is
    opposite to the numpy axis kwarg.)
    """
    M = np.asarray(M, np.float_)
    if dim:
        M = (M - M.mean(axis=0)) / M.std(axis=0)
    else:
        M = (M - M.mean(axis=1)[:, np.newaxis])
        M = M / M.std(axis=1)[:, np.newaxis]
    return M


def bivariate_normal(X, Y, sigmax=1.0, sigmay=1.0,
                     mux=0.0, muy=0.0, sigmaxy=0.0):
    """
    Bivariate Gaussian distribution for equal shape *X*, *Y*.

    See bivariate normal at mathworld.
    """
    Xmu = X - mux
    Ymu = Y - muy

    rho = sigmaxy / (sigmax * sigmay)
    z = Xmu ** 2 / sigmax ** 2 + Ymu ** 2 / sigmay ** 2 - \
        2 * rho * Xmu * Ymu / (sigmax * sigmay)
    denom = 2 * np.pi * sigmax * sigmay * np.sqrt(1 - rho ** 2)
    return np.exp(-z / (2 * (1 - rho ** 2))) / denom
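prctile_rank above is essentially a ``searchsorted`` of the data
against the percentile cutoffs.  The same ranking can be sketched with
the stdlib ``bisect`` module, assuming the cutoff *values* have
already been computed (the ``_py`` name is mine, not mlab's):

```python
import bisect

def prctile_rank_py(x, cutoffs):
    # cutoffs: sorted percentile cutoff values (e.g. the 25th/50th/75th
    # percentile values of the data); rank i means the sample falls in
    # the i-th bin delimited by those cutoffs
    return [bisect.bisect_left(cutoffs, v) for v in x]
```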
def get_xyz_where(Z, Cond):
    """
    *Z* and *Cond* are *M* x *N* matrices.  *Z* are data and *Cond* is
    a boolean matrix where some condition is satisfied.  Return value
    is (*x*, *y*, *z*) where *x* and *y* are the indices into *Z* and
    *z* are the values of *Z* at those indices.  *x*, *y*, and *z* are
    1D arrays.
    """
    X, Y = np.indices(Z.shape)
    return X[Cond], Y[Cond], Z[Cond]


def liaupunov(x, fprime):
    """
    *x* is a very long trajectory from a map, and *fprime* returns the
    derivative of *x*.

    This function will be removed from matplotlib.

    .. note::

        What the function here calculates may not be what you really
        want; *caveat emptor*.

        It also seems that this function's name is badly misspelled.
    """
    warnings.warn(
        "This does not belong in matplotlib and will be removed",
        DeprecationWarning)
    return np.mean(np.log(np.absolute(fprime(x))))


class FIFOBuffer:
    """
    A FIFO queue to hold incoming *x*, *y* data in a rotating buffer
    using numpy arrays under the hood.  It is assumed that you will
    call asarrays much less frequently than you add data to the queue
    -- otherwise another data structure will be faster.

    This can be used to support plots where data is added from a real
    time feed and the plot object wants to grab data from the buffer
    and plot it to screen less frequently than the incoming.

    If you set the *dataLim* attr to
    :class:`~matplotlib.transforms.BBox` (eg
    :attr:`matplotlib.Axes.dataLim`), the *dataLim* will be updated as
    new data come in.

    TODO: add a grow method that will extend nmax

    .. note::

      mlab seems like the wrong place for this class.
    """
    def __init__(self, nmax):
        """
        Buffer up to *nmax* points.
        """
        self._xa = np.zeros((nmax,), np.float_)
        self._ya = np.zeros((nmax,), np.float_)
        self._xs = np.zeros((nmax,), np.float_)
        self._ys = np.zeros((nmax,), np.float_)
        self._ind = 0
        self._nmax = nmax
        self.dataLim = None
        self.callbackd = {}

    def register(self, func, N):
        """
        Call *func* every time *N* events are passed; *func* signature
        is ``func(fifo)``.
        """
        self.callbackd.setdefault(N, []).append(func)
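The rotating-buffer behavior FIFOBuffer implements by hand can be had
in modern Python with ``collections.deque(maxlen=...)``.  A minimal
sketch of the idea (not the mlab class, and without the dataLim or
callback machinery):

```python
from collections import deque

class FIFOBufferPy(object):
    """Keep only the last nmax (x, y) points; oldest are dropped first."""
    def __init__(self, nmax):
        self._buf = deque(maxlen=nmax)

    def add(self, x, y):
        # deque with maxlen silently discards from the left when full
        self._buf.append((x, y))

    def asarrays(self):
        # return the buffered points as two parallel lists
        xs = [p[0] for p in self._buf]
        ys = [p[1] for p in self._buf]
        return xs, ys
```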
Works like :func:`map`, but it returns an array. This is just a convenient shorthand for ``numpy.array(map(...))``. (RRtmap(tfnRd((sB/opt/alt/python27/lib64/python2.7/site-packages/matplotlib/mlab.pytamapscC s#tjtjtj|dS(sP Return the root mean square of all the elements of *a*, flattened out. i(RR RR_(R+((sB/opt/alt/python27/lib64/python2.7/site-packages/matplotlib/mlab.pytrms_flatscC stjtj|S(s Return the *l1* norm of *a*, flattened out. Implemented as a separate function (not a call to :func:`norm` for speed). (RR>R_(R+((sB/opt/alt/python27/lib64/python2.7/site-packages/matplotlib/mlab.pytl1normscC s#tjtjtj|dS(s Return the *l2* norm of *a*, flattened out. Implemented as a separate function (not a call to :func:`norm` for speed). i(RR R>R_(R+((sB/opt/alt/python27/lib64/python2.7/site-packages/matplotlib/mlab.pytl2normscC sH|dkr"tjtj|Stjtj||d|SdS(s norm(a,p=2) -> l-p norm of a.flat Return the l-p norm of *a*, considered as a flat array. This is NOT a true matrix norm, since arrays of arbitrary rank are always flattened. *p* can be a number or the string 'Infinity' to get the L-infinity norm. tInfinityg?N(RtamaxR_R>(R+R((sB/opt/alt/python27/lib64/python2.7/site-packages/matplotlib/mlab.pyt norm_flats cK s|jdd|ddk}|dkr?|d}d}n|dkrTd}ny&|d}||t||}Wn/tk rtt||||}nXtj|||S(s frange([start,] stop[, step, keywords]) -> array of floats Return a numpy ndarray containing a progression of floats. Similar to :func:`numpy.arange`, but defaults to a closed interval. ``frange(x0, x1)`` returns ``[x0, x0+1, x0+2, ..., x1]``; *start* defaults to 0, and the endpoint *is included*. This behavior is different from that of :func:`range` and :func:`numpy.arange`. This is deliberate, since :func:`frange` will probably be more useful for generating lists of points for function evaluation, and endpoints are often desired in this use. The usual behavior of :func:`range` can be obtained by setting the keyword *closed* = 0, in this case, :func:`frange` basically becomes :func:numpy.arange`. 
def frange(xini, xfin=None, delta=None, **kw):
    """
    frange([start,] stop[, step, keywords]) -> array of floats

    Return a numpy ndarray containing a progression of floats. Similar
    to :func:`numpy.arange`, but defaults to a closed interval.

    ``frange(x0, x1)`` returns ``[x0, x0+1, x0+2, ..., x1]``; *start*
    defaults to 0, and the endpoint *is included*. This behavior is
    different from that of :func:`range` and
    :func:`numpy.arange`. This is deliberate, since :func:`frange`
    will probably be more useful for generating lists of points for
    function evaluation, and endpoints are often desired in this
    use. The usual behavior of :func:`range` can be obtained by
    setting the keyword *closed* = 0, in this case, :func:`frange`
    basically becomes :func:`numpy.arange`.

    When *step* is given, it specifies the increment (or
    decrement). All arguments can be floating point numbers.

    ``frange(x0,x1,d)`` returns ``[x0,x0+d,x0+2d,...,xfin]`` where
    *xfin* <= *x1*.

    :func:`frange` can also be called with the keyword *npts*. This
    sets the number of points the list should contain (and overrides
    the value *step* might have been given). :func:`numpy.arange`
    doesn't offer this option.

    Examples::

      >>> frange(3)
      array([ 0.,  1.,  2.,  3.])
      >>> frange(3,closed=0)
      array([ 0.,  1.,  2.])
      >>> frange(1,6,2)
      array([1, 3, 5])   or 1,3,5,7, depending on floating point vagaries
      >>> frange(1,6.5,npts=5)
      array([ 1.   ,  2.375,  3.75 ,  5.125,  6.5  ])
    """
    kw.setdefault('closed', 1)
    endpoint = kw['closed'] != 0

    # funny logic to allow the *first* argument to be optional (like
    # range()); this swapping of arguments doesn't work with the
    # normal keyword arg mechanism
    if xfin is None:
        xfin = xini + 0.0
        xini = 0.0

    if delta is None:
        delta = 1.0

    # compute # of points, spacing and return final list
    try:
        npts = kw['npts']
        delta = (xfin - xini) / float(npts - endpoint)
    except KeyError:
        npts = int(round((xfin - xini) / delta)) + endpoint

    return np.arange(npts) * delta + xini


def identity(n, rank=2, dtype='l', typecode=None):
    """
    Returns the identity matrix of shape (*n*, *n*, ..., *n*) (rank *r*).
    """
    if typecode is not None:
        dtype = typecode
    iden = np.zeros((n,) * rank, dtype)
    for i in range(n):
        idx = (i,) * rank
        iden[idx] = 1
    return iden
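The *npts* branch of frange divides the interval into ``npts - 1``
steps so the endpoint is hit exactly.  The docstring example
``frange(1,6.5,npts=5)`` can be reproduced with a tiny closed-interval
helper in pure Python (the name is mine, not mlab's):

```python
def frange_npts_py(x0, x1, npts):
    # closed interval: npts points from x0 to x1 inclusive
    step = (x1 - x0) / float(npts - 1)
    return [x0 + i * step for i in range(npts)]
```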
class FormatObj:
    def tostr(self, x):
        return self.toval(x)

    def toval(self, x):
        return str(x)

    def fromstr(self, s):
        return s


class FormatString(FormatObj):
    def tostr(self, x):
        val = repr(x)
        return val[1:-1]


class FormatFormatStr(FormatObj):
    def __init__(self, fmt):
        self.fmt = fmt

    def tostr(self, x):
        if x is None:
            return 'None'
        return self.fmt % self.toval(x)


class FormatFloat(FormatFormatStr):
    def __init__(self, precision=4, scale=1.):
        FormatFormatStr.__init__(self, '%%1.%df' % precision)
        self.precision = precision
        self.scale = scale

    def toval(self, x):
        if x is not None:
            x = x * self.scale
        return x

    def fromstr(self, s):
        return float(s) / self.scale


class FormatInt(FormatObj):
    def tostr(self, x):
        return '%d' % int(x)

    def toval(self, x):
        return int(x)

    def fromstr(self, s):
        return int(s)


class FormatBool(FormatObj):
    def toval(self, x):
        return str(x)

    def fromstr(self, s):
        return bool(s)


class FormatPercent(FormatFloat):
    def __init__(self, precision=4):
        FormatFloat.__init__(self, precision, scale=100.)


class FormatThousands(FormatFloat):
    def __init__(self, precision=4):
        FormatFloat.__init__(self, precision, scale=1e-3)


class FormatMillions(FormatFloat):
    def __init__(self, precision=4):
        FormatFloat.__init__(self, precision, scale=1e-6)


class FormatDate(FormatObj):
    def __init__(self, fmt):
        self.fmt = fmt

    def toval(self, x):
        if x is None:
            return 'None'
        return x.strftime(self.fmt)

    def fromstr(self, x):
        import dateutil.parser
        return dateutil.parser.parse(x).date()
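The Format hierarchy pairs a printf-style format string with a scale
factor, so FormatPercent is just FormatFloat with ``scale=100``.  A
self-contained sketch of that round trip (the class name is mine, not
mlab's):

```python
class ScaledFloatFormat(object):
    # mimics the FormatFloat idea: scale on the way out (tostr),
    # unscale on the way back in (fromstr)
    def __init__(self, precision=4, scale=1.0):
        self.fmt = '%%1.%df' % precision
        self.scale = scale

    def tostr(self, x):
        return self.fmt % (x * self.scale)

    def fromstr(self, s):
        return float(s) / self.scale
```

With ``scale=100.0`` this renders a fraction as a percentage, exactly
the FormatPercent use case in the module docstring example.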
class FormatDatetime(FormatDate):
    def __init__(self, fmt='%Y-%m-%d %H:%M:%S'):
        FormatDate.__init__(self, fmt)

    def fromstr(self, x):
        import dateutil.parser
        return dateutil.parser.parse(x)


defaultformatd = {
    np.bool_: FormatBool(),
    np.int16: FormatInt(),
    np.int32: FormatInt(),
    np.int64: FormatInt(),
    np.float32: FormatFloat(),
    np.float64: FormatFloat(),
    np.object_: FormatObj(),
    np.string_: FormatString(),
    }


def get_formatd(r, formatd=None):
    'build a formatd guaranteed to have a key for every dtype name'
    if formatd is None:
        formatd = dict()

    for i, name in enumerate(r.dtype.names):
        dt = r.dtype[name]
        format = formatd.get(name)
        if format is None:
            format = defaultformatd.get(dt.type, FormatObj())
        formatd[name] = format
    return formatd


def rec2txt(r, header=None, padding=3, precision=3):
    """
    Returns a textual representation of a record array.  *header*
    overrides the column names, *padding* is the space between
    columns, and *precision* (a scalar, or a sequence with one entry
    per column) sets the number of decimal places used for float
    columns.  Numeric columns are right-justified and string columns
    left-justified.
    """
def rec2csv(r, fname, delimiter=',', formatd=None, missing='',
            missingd=None, withheader=True):
    """
    Save the data from numpy recarray *r* into a
    comma-/space-/tab-delimited file.  The record array dtype names
    will be used for column headers.

    *fname*: can be a filename or a file handle.  Support for gzipped
      files is automatic, if the filename ends in '.gz'

    *withheader*: if withheader is False, do not write the attribute
      names in the first row

    for formatd type FormatFloat, we override the precision to store
    full precision floats in the CSV file

    .. seealso::

        :func:`csv2rec`
            For information about *missing* and *missingd*, which can
            be used to fill in masked values into your CSV file.
    """


def griddata(x, y, z, xi, yi, interp='nn'):
    """
    ``zi = griddata(x,y,z,xi,yi)`` fits a surface of the form *z* =
    *f*(*x*, *y*) to the data in the (usually) nonuniformly spaced
    vectors (*x*, *y*, *z*).  :func:`griddata` interpolates this
    surface at the points specified by (*xi*, *yi*) to produce
    *zi*. *xi* and *yi* must describe a regular grid, can be either 1D
    or 2D, but must be monotonically increasing.

    A masked array is returned if any grid points are outside convex
    hull defined by input data (no extrapolation is done).

    If interp keyword is set to '`nn`' (default), uses natural
    neighbor interpolation based on Delaunay triangulation.  By
    default, this algorithm is provided by the
    :mod:`matplotlib.delaunay` package, written by Robert Kern.  The
    triangulation algorithm in this package is known to fail on some
    nearly pathological cases.  For this reason, a separate toolkit
    (:mod:`mpl_toolkits.natgrid`) has been created that provides a
    more robust algorithm for triangulation and interpolation.  This
    toolkit is based on the NCAR natgrid library, which contains code
    that is not redistributable under a BSD-compatible license.  When
    installed, this function will use the :mod:`mpl_toolkits.natgrid`
    algorithm, otherwise it will use the built-in
    :mod:`matplotlib.delaunay` package.

    If the interp keyword is set to '`linear`', then linear
    interpolation is used instead of natural neighbor.  In this case,
    the output grid is assumed to be regular with a constant grid
    spacing in both the x and y directions.  For regular grids with
    nonconstant grid spacing, you must use natural neighbor
    interpolation.  Linear interpolation is only valid if
    :mod:`matplotlib.delaunay` package is used -
    :mod:`mpl_toolkits.natgrid` only provides natural neighbor
    interpolation.

    The natgrid matplotlib toolkit can be downloaded from
    http://sourceforge.net/project/showfiles.php?group_id=80706&package_id=142792
    """
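rec2csv and csv2rec ultimately ride on the stdlib ``csv`` module: the
Format objects convert each cell to text on the way out, and type
inspection converts text back on the way in.  A minimal round trip
using only stdlib ``csv`` (no record arrays involved):

```python
import csv
import io

# two data rows plus a header row, like withheader=True
rows = [('alpha', '1.25'), ('beta', '-0.50')]

buf = io.StringIO()
w = csv.writer(buf)
w.writerow(['name', 'weight'])
w.writerows(rows)

# read it back; every cell comes back as a string, which is why
# csv2rec needs type inspection to recover ints/floats/dates
buf.seek(0)
back = [tuple(row) for row in csv.reader(buf)]
```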
def stineman_interp(xi, x, y, yp=None):
    """
    Given data vectors *x* and *y*, the slope vector *yp* and a new
    abscissa vector *xi*, the function :func:`stineman_interp` uses
    Stineman interpolation to calculate a vector *yi* corresponding to
    *xi*.

    The interpolation method is described in the article A
    CONSISTENTLY WELL BEHAVED METHOD OF INTERPOLATION by Russell
    W. Stineman, which appeared in the July 1980 issue of Creative
    Computing.

    For *yp* = *None*, the routine automatically determines the
    slopes.  *x* is assumed to be sorted in increasing order.

    For values ``xi[j] < x[0]`` or ``xi[j] > x[-1]``, the routine
    tries an extrapolation.  The relevance of the data obtained from
    this, of course, is questionable...

    Original implementation by Halldor Bjornsson, Icelandic
    Meteorological Office, March 2006 halldor at vedur.is

    Completely reworked and optimized for Python by Norbert Nemec,
    Institute of Theoretical Physics, University of Regensburg, April
    2006 Norbert.Nemec at physik.uni-regensburg.de
    """
def inside_poly(points, verts):
    """
    *points* is a sequence of *x*, *y* points.  *verts* is a sequence
    of *x*, *y* vertices of a polygon.

    Return value is a sequence of indices into points for the points
    that are inside the polygon.
    """
    res, = np.nonzero(nxutils.points_inside_poly(points, verts))
    return res


def poly_below(xmin, xs, ys):
    """
    Given a sequence of *xs* and *ys*, return the vertices of a
    polygon that has a horizontal base at *xmin* and an upper bound at
    the *ys*.  *xmin* is a scalar.

    Intended for use with :meth:`matplotlib.axes.Axes.fill`, eg::

      xv, yv = poly_below(0, x, y)
      ax.fill(xv, yv)
    """
    if ma.isMaskedArray(xs) or ma.isMaskedArray(ys):
        numpy = ma
    else:
        numpy = np

    xs = numpy.asarray(xs)
    ys = numpy.asarray(ys)
    Nx = len(xs)
    Ny = len(ys)
    assert Nx == Ny
    x = xmin * numpy.ones(2 * Nx)
    y = numpy.ones(2 * Nx)
    x[:Nx] = xs
    y[:Nx] = ys
    y[Nx:] = ys[::-1]
    return x, y


def poly_between(x, ylower, yupper):
    """
    Given a sequence of *x*, *ylower* and *yupper*, return the polygon
    that fills the regions between them.  *ylower* or *yupper* can be
    scalar or iterable.  If they are iterable, they must be equal in
    length to *x*.

    Return value is *x*, *y* arrays for use with
    :meth:`matplotlib.axes.Axes.fill`.
    """
    if ma.isMaskedArray(ylower) or ma.isMaskedArray(yupper) or \
            ma.isMaskedArray(x):
        numpy = ma
    else:
        numpy = np

    Nx = len(x)
    if not cbook.iterable(ylower):
        ylower = ylower * numpy.ones(Nx)

    if not cbook.iterable(yupper):
        yupper = yupper * numpy.ones(Nx)

    x = numpy.concatenate((x, x[::-1]))
    y = numpy.concatenate((yupper, ylower[::-1]))
    return x, y
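poly_between builds the fill polygon by walking forward along the
upper curve and back along the lower one, which closes the outline.
In pure Python the construction is just two concatenations (the name
is illustrative, not mlab's):

```python
def poly_between_py(x, ylower, yupper):
    # forward along yupper, then back along ylower reversed, so the
    # outline encloses the band between the two curves
    xs = list(x) + list(x)[::-1]
    ys = list(yupper) + list(ylower)[::-1]
    return xs, ys
```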
def is_closed_polygon(X):
    """
    Tests whether first and last object in a sequence are the same.
    These are presumably coordinates on a polygonal curve, in which
    case this function tests if that curve is closed.
    """
    return np.all(X[0] == X[-1])


def contiguous_regions(mask):
    """
    return a list of (ind0, ind1) such that mask[ind0:ind1].all() is
    True and we cover all such regions

    TODO: this is a pure python implementation which probably has a
    much faster numpy impl
    """
    in_region = None
    boundaries = []
    for i, val in enumerate(mask):
        if in_region is None and val:
            in_region = i
        elif in_region is not None and not val:
            boundaries.append((in_region, i))
            in_region = None

    if in_region is not None:
        boundaries.append((in_region, i + 1))
    return boundaries


def cross_from_below(x, threshold):
    """
    return the indices into *x* where *x* crosses some threshold from
    below, eg the i's where::

      x[i-1] < threshold and x[i] >= threshold

    Example code::

        import matplotlib.pyplot as plt

        t = np.arange(0.0, 2.0, 0.1)
        s = np.sin(2*np.pi*t)

        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.plot(t, s, '-o')
        ax.axhline(0.5)
        ax.axhline(-0.5)

        ind = cross_from_below(s, 0.5)
        ax.vlines(t[ind], -1, 1)

        ind = cross_from_above(s, -0.5)
        ax.vlines(t[ind], -1, 1)

        plt.show()

    .. seealso::

        :func:`cross_from_above` and :func:`contiguous_regions`
    """
    x = np.asarray(x)
    ind = np.nonzero((x[:-1] < threshold) & (x[1:] >= threshold))[0]
    if len(ind):
        return ind + 1
    else:
        return ind


def cross_from_above(x, threshold):
    """
    return the indices into *x* where *x* crosses some threshold from
    above, eg the i's where::

      x[i-1] > threshold and x[i] <= threshold

    .. seealso::

        :func:`cross_from_below` and :func:`contiguous_regions`
    """
    x = np.asarray(x)
    ind = np.nonzero((x[:-1] >= threshold) & (x[1:] < threshold))[0]
    if len(ind):
        return ind + 1
    else:
        return ind
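The threshold-crossing test just compares each sample with its
predecessor.  A pure-Python version of the cross_from_below idea for
plain list inputs (the name is mine, not mlab's):

```python
def cross_from_below_py(x, threshold):
    # i such that x[i-1] < threshold <= x[i]
    return [i for i in range(1, len(x)) if x[i - 1] < threshold <= x[i]]
```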
def vector_lengths(X, P=2., axis=None):
    """
    Finds the length of a set of vectors in *n* dimensions.  This is
    like the :func:`numpy.norm` function for vectors, but has the
    ability to work over a particular axis of the supplied array or
    matrix.

    Computes ``(sum((x_i)^P))^(1/P)`` for each ``{x_i}`` being the
    elements of *X* along the given axis.  If *axis* is *None*,
    compute over all elements of *X*.
    """
    X = np.asarray(X)
    return (np.sum(X ** P, axis=axis)) ** (1. / P)


def distances_along_curve(X):
    """
    Computes the distance between a set of successive points in *N*
    dimensions.

    Where *X* is an *M* x *N* array or matrix.  The distances between
    successive rows is computed.  Distance is the standard Euclidean
    distance.
    """
    X = np.diff(X, axis=0)
    return vector_lengths(X, axis=1)


def path_length(X):
    """
    Computes the distance travelled along a polygonal curve in *N*
    dimensions.

    Where *X* is an *M* x *N* array or matrix.  Returns an array of
    length *M* consisting of the distance along the curve at each point
    (i.e., the rows of *X*).
    """
    X = distances_along_curve(X)
    return np.concatenate((np.zeros(1), np.cumsum(X)))


def quad2cubic(q0x, q0y, q1x, q1y, q2x, q2y):
    """
    Converts a quadratic Bezier curve to a cubic approximation.

    The inputs are the *x* and *y* coordinates of the three control
    points of a quadratic curve, and the output is a tuple of *x* and
    *y* coordinates of the four control points of the cubic curve.
    """
    c1x, c1y = q0x + 2. / 3. * (q1x - q0x), q0y + 2. / 3. * (q1y - q0y)
    c2x, c2y = c1x + 1. / 3. * (q2x - q0x), c1y + 1. / 3. * (q2y - q0y)
    return q0x, q0y, c1x, c1y, c2x, c2y, q2x, q2y
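The quad2cubic control points come from standard Bezier degree
elevation, so the resulting cubic traces the quadratic exactly, not
just approximately.  A quick numerical check of that claim in one
dimension (all helper names are mine):

```python
def quad_point(t, p0, p1, p2):
    # evaluate a 1D quadratic Bezier at parameter t
    s = 1.0 - t
    return s * s * p0 + 2.0 * s * t * p1 + t * t * p2

def cubic_point(t, p0, p1, p2, p3):
    # evaluate a 1D cubic Bezier at parameter t
    s = 1.0 - t
    return s**3 * p0 + 3 * s * s * t * p1 + 3 * s * t * t * p2 + t**3 * p3

def quad2cubic_py(q0, q1, q2):
    # degree elevation: c1 = q0 + 2/3 (q1 - q0), c2 = q2 + 2/3 (q1 - q2);
    # this is the same construction quad2cubic applies per coordinate
    c1 = q0 + 2.0 / 3.0 * (q1 - q0)
    c2 = q2 + 2.0 / 3.0 * (q1 - q2)
    return q0, c1, c2, q2
```

Evaluating both curves at several parameter values shows they agree to
floating-point precision.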