
DataFrames.jl

Welcome to the DataFrames.jl documentation!

This resource aims to teach you everything you need to know to get up and running with tabular data manipulation using the DataFrames.jl package.

For more illustrations of DataFrames.jl usage, in particular in conjunction with other packages, you can check out the following resources (they are kept up to date with the released version of DataFrames.jl):

  • Data Wrangling with DataFrames.jl Cheat Sheet
  • DataFrames Tutorial using Jupyter Notebooks
  • Julia Academy DataFrames.jl tutorial
  • JuliaCon 2019, JuliaCon 2020, JuliaCon 2021, JuliaCon 2022, PyData Global 2020, and ODSC Europe 2021 tutorials
  • DataFrames.jl showcase

If you prefer to learn DataFrames.jl from a book you can consider reading:

  • Julia for Data Analysis;
  • Julia Data Science.

What is DataFrames.jl?

DataFrames.jl provides a set of tools for working with tabular data in Julia. Its design and functionality are similar to those of pandas (in Python) and data.frame, data.table and dplyr (in R), making it a great general purpose data science tool.
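As a minimal sketch of what this looks like in practice (the column names and values below are made up for illustration):

    using DataFrames, Statistics

    # construct a data frame from column vectors
    df = DataFrame(id=1:4, group=["a", "b", "a", "b"], value=[10.5, 3.2, 7.7, 1.4])

    # split-apply-combine: mean of value within each group
    combine(groupby(df, :group), :value => mean => :mean_value)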

DataFrames.jl plays a central role in the Julia Data ecosystem, and has tight integrations with a range of different libraries. DataFrames.jl isn't the only tool for working with tabular data in Julia – as noted below, there are some other great libraries for certain use-cases – but it provides great data wrangling functionality through a familiar interface.

To understand the toolchain in more detail, have a look at the tutorials in this manual. New users can start with the First Steps with DataFrames.jl section.

You may find the DataFramesMeta.jl package or one of the other convenience packages discussed in the Data manipulation frameworks section of this manual helpful when writing more advanced data transformations, especially if you do not have significant programming experience. These packages provide convenience syntax similar to dplyr in R.
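For example, a brief sketch of the convenience syntax DataFramesMeta.jl provides (the column names are illustrative; see that package's documentation for the full set of macros):

    using DataFrames, DataFramesMeta

    df = DataFrame(name=["a", "b", "c"], x=[1, 2, 3])

    # keep rows with x > 1, then add a derived column
    @chain df begin
        @subset(:x .> 1)
        @transform(:y = 2 .* :x)
    end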

If you use metadata when working with DataFrames.jl you might find the TableMetadataTools.jl package useful. This package defines several convenience functions for performing typical metadata operations.

DataFrames.jl and the Julia Data Ecosystem

The Julia data ecosystem can be a difficult space for new users to navigate, in part because the Julia ecosystem tends to distribute functionality across different libraries more than some other languages. Because many people coming to DataFrames.jl are just starting to explore the Julia data ecosystem, below is a list of well-supported libraries that provide different data science tools, along with a few notes about what makes each library special, and how well integrated they are with DataFrames.jl.

  • StatsKit.jl : A convenience meta-package which loads a set of essential packages for statistics, including those mentioned below in this section and DataFrames.jl itself.
  • Statistics : The Julia standard library comes with a wide range of statistics functionality, but to gain access to these functions you must call using Statistics .
  • LinearAlgebra : Like Statistics , many linear algebra features (factorizations, inversions, etc.) live in a library you have to load to use.
  • SparseArrays are also in the standard library but must be loaded to be used.
  • FreqTables.jl : Create frequency tables / cross-tabulations. Tightly integrated with DataFrames.jl.
  • HypothesisTests.jl : A range of hypothesis testing tools.
  • GLM.jl : Tools for estimating linear and generalized linear models. Tightly integrated with DataFrames.jl.
  • StatsModels.jl : For converting a heterogeneous DataFrame into homogeneous matrices for use with linear algebra libraries or machine learning applications that don't directly support DataFrames. It will do things like convert categorical variables into indicators/one-hot encodings, create interaction terms, etc. (see the sketch after this list).
  • MultivariateStats.jl : Linear regression, ridge regression, PCA, and component analysis tools. Not well integrated with DataFrames.jl, but easily used in combination with StatsModels.jl.
  • MLJ.jl : If you're more of an applied user, MLJ.jl pulls from all these different libraries and provides a single, scikit-learn-inspired API. It offers a common interface for a wide range of machine learning algorithms.
  • ScikitLearn.jl : A Julia wrapper around the full Python scikit-learn machine learning library. Not well integrated with DataFrames.jl, but can be combined using StatsModels.jl.
  • AutoMLPipeline : A package that makes it trivial to create complex ML pipeline structures using simple expressions. It leverages Julia's built-in macro programming features to symbolically process and manipulate pipeline expressions, making it easy to discover optimal structures for machine learning regression and classification.
  • Deep learning: Knet.jl and Flux.jl.
  • Plots.jl : Powerful, modern plotting library with a syntax akin to that of matplotlib (in Python) or plot (in R). StatsPlots.jl provides Plots.jl with recipes for many standard statistical plots.
  • Gadfly.jl : High-level plotting library with a "grammar of graphics" syntax akin to that of ggplot (in R).
  • AlgebraOfGraphics.jl : A "grammar of graphics" library built upon Makie.jl.
  • VegaLite.jl : High-level plotting library that uses a different "grammar of graphics" syntax and has an emphasis on interactive graphics.
  • Impute.jl : various methods for handling missing data in vectors, matrices and tables.
  • DataFramesMeta.jl : A range of convenience functions for DataFrames.jl that augment select and transform to provide a user experience similar to that provided by dplyr in R.
  • DataFrameMacros.jl : Provides macro versions of the common DataFrames.jl functions similar to DataFramesMeta.jl, with convenient syntax for the manipulation of multiple columns at once.
  • Query.jl : Query.jl provides a single framework for data wrangling that works with a range of libraries, including DataFrames.jl, other tabular data libraries (more on those below), and even non-tabular data. Provides many convenience functions analogous to those in dplyr in R or LINQ .
  • You can find more information on these packages in the Data manipulation frameworks section of this manual.
  • Graphs.jl : A pure-Julia, high performance network analysis library. Edgelists in DataFrames can be easily converted into graphs using the GraphDataFrameBridge.jl package.
  • File input and output: DataFrames.jl works well with a range of file formats via companion packages, including:
      • CSV files (using CSV.jl),
      • Apache Arrow files (using Arrow.jl),
      • Stata, SAS and SPSS files (using ReadStatTables.jl; alternatively Queryverse users can choose StatFiles.jl),
      • Parquet files (using Parquet2.jl),
      • R data files (.rda, .RData) (using RData.jl).
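As referenced in the StatsModels.jl item above, here is a small sketch of fitting a model directly from a DataFrame via the @formula interface shared by StatsModels.jl and GLM.jl (the data is made up for illustration):

    using DataFrames, GLM

    df = DataFrame(y=[1.2, 2.3, 3.1, 4.8], x=[1.0, 2.0, 3.0, 4.0], g=["a", "b", "a", "b"])

    # GLM uses the @formula interface from StatsModels under the hood: the
    # heterogeneous data frame is turned into a numeric model matrix, with the
    # categorical column g expanded into indicator columns
    model = lm(@formula(y ~ x + g), df)

    coeftable(model)   # estimated coefficients with standard errors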

While not all of these libraries are tightly integrated with DataFrames.jl, because DataFrames are essentially collections of aligned Julia vectors it is easy to (a) pull out a vector for use with a non-DataFrames-integrated library, or (b) convert your table into a homogeneously-typed matrix using the Matrix constructor or StatsModels.jl.
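For example (a minimal sketch with made-up columns):

    using DataFrames

    df = DataFrame(a=[1.0, 2.0, 3.0], b=[4.0, 5.0, 6.0])

    # (a) pull out a single column as an ordinary Julia vector
    v = df.a

    # (b) convert the whole table into a homogeneously-typed matrix
    m = Matrix(df)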

Other Julia Tabular Libraries

DataFrames.jl is a great general purpose tool for data manipulation and wrangling, but it's not ideal for all applications. For users with more specialized needs, consider using:

  • TypedTables.jl : Type-stable heterogeneous tables. Useful for improved performance when the structure of your table is relatively stable and does not feature thousands of columns.
  • JuliaDB.jl : For users working with data that is too large to fit in memory, we suggest JuliaDB.jl, which offers better performance for large datasets, and can handle out-of-core data manipulations (Python users can think of JuliaDB.jl as the Julia version of dask ).

Note that most tabular data libraries in the Julia ecosystem (including DataFrames.jl) support a common interface (defined in the Tables.jl package). As a result, some libraries are capable of working with a range of tabular data structures, making it easy to move between tabular libraries as your needs change. A user of Query.jl, for example, can use the same code to manipulate data in a DataFrame, a Table (defined by TypedTables.jl), or a JuliaDB table.
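As a minimal sketch of that interchange (column contents are illustrative):

    using DataFrames, Tables

    df = DataFrame(a=1:3, b=["x", "y", "z"])

    # any Tables.jl sink can consume a DataFrame, e.g. as a named tuple of columns
    ct = Tables.columntable(df)

    # and a DataFrame can be constructed from any Tables.jl source
    df2 = DataFrame(ct)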

If there is something you expect DataFrames to be capable of, but cannot figure out how to do, please reach out with questions in Domains/Data on Discourse . Additionally you might want to listen to an introduction to DataFrames.jl on JuliaAcademy .

Please report bugs by opening an issue .

You can follow the source links throughout the documentation to jump right to the source files on GitHub to make pull requests for improving the documentation and function capabilities.

Please review DataFrames contributing guidelines before submitting your first PR!

Information on specific versions can be found on the Release page .

Package Manual

  • First Steps with DataFrames.jl
  • Setting up the Environment
  • Constructors and Basic Utility Functions
  • Getting and Setting Data in a Data Frame
  • Basic Usage of Transformation Functions
  • Getting Started
  • Installation
  • The DataFrame Type
  • Database-Style Joins
  • Introduction to joins
  • Key value comparisons and floating point values
  • Joining on key columns with different names
  • Handling of duplicate keys and tracking source data frame
  • Renaming joined columns
  • Matching missing values in joins
  • Specifying row order in the join result
  • In-place left join
  • The Split-Apply-Combine Strategy
  • Design of the split-apply-combine support
  • Examples of the split-apply-combine operations
  • Using GroupedDataFrame as an iterable and indexable object
  • Simulating the SQL where clause
  • Column-independent operations
  • Column-independent operations versus functions
  • Specifying group order in groupby
  • Reshaping and Pivoting Data
  • Categorical Data
  • Missing Data
  • Comparisons
  • Comparison with the Python package pandas
  • Comparison with the R package dplyr
  • Comparison with the R package data.table
  • Comparison with Stata (version 8 and above)
  • Data manipulation frameworks
  • DataFramesMeta.jl
  • DataFrameMacros.jl

Only exported types and functions (i.e. those available for use without the DataFrames. qualifier after loading the DataFrames.jl package with using DataFrames) are considered part of the public API of the DataFrames.jl package. In general, all such objects are documented in this manual (in case some documentation is missing, please kindly report an issue here).

Breaking changes to public and documented API are avoided in DataFrames.jl where possible.

The following changes are not considered breaking:

  • specific floating point values computed by operations may change at any time; users should rely only on approximate accuracy;
  • in functions that use the default random number generator provided by Base Julia the specific random numbers computed may change across Julia versions;
  • if the changed functionality is classified as a bug;
  • if in its implementation some function accepted a wider range of arguments than it was documented to handle, changes in the handling of undocumented arguments are not considered breaking;
  • the type of the value returned by a function changes, but it still follows the contract specified in the documentation; for example, if a function is documented to return a vector, then changing its type from Vector to PooledVector is not considered breaking;
  • error behavior: code that threw an exception can change exception type thrown or stop throwing an exception;
  • changes in display (how objects are printed);
  • changes to the state of global objects from Base Julia whose state normally is considered volatile (e.g. state of global random number generator).

All types and functions that are part of the public API are guaranteed to go through a deprecation period before a breaking change is made to them or before they are removed.

The standard practice is that breaking changes are implemented when a major release of DataFrames.jl is made (e.g. functionalities deprecated in a 1.x release would be changed in the 2.0 release).

In rare cases a breaking change might be introduced in a minor release. In such a case the changed behavior still goes through one minor release during which it is deprecated. The situations where such a breaking change might be allowed are (even then, such breaking changes will be avoided if possible):

  • the affected functionality was previously clearly identified in the documentation as being subject to changes (for example, in the DataFrames.jl 1.4 release the propagation rules of :note-style metadata are documented as such);
  • the change is on the border of being classified as a bug (in rare cases even if a behavior of some function was documented its consequences for certain argument combinations could be decided to be unintended and not wanted);
  • the change is needed to adjust DataFrames.jl functionality to changes in Base Julia.

Please be warned that while Julia allows you to access internal functions or types of DataFrames.jl, these can change without warning between versions of DataFrames.jl. In particular, it is not safe to directly access fields of types that are a part of the public API of the DataFrames.jl package, e.g. using the getfield function. Whenever some operation on the fields of defined types is allowed, an appropriate exported function should be used instead.
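For illustration, a short sketch contrasting the exported accessors with direct field access (the field name in the discouraged call is hypothetical):

    using DataFrames

    df = DataFrame(a=1:3, b=4:6)

    # public API: exported accessors that are stable across releases
    names(df)      # column names as strings
    eachcol(df)    # iterator over the columns
    df.a           # a single column

    # not safe: reaching into internals, e.g. getfield(df, :columns),
    # may break without warning in a future DataFrames.jl release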

  • Type hierarchy design
  • The design of handling of columns of a DataFrame
  • Types specification
  • Multithreading support
  • Constructing data frames
  • Summary information
  • Working with column names
  • Mutating and transforming data frames and grouped data frames
  • Reshaping data frames between tall and wide formats
  • Filtering rows
  • Working with missing values
  • General rules
  • getindex and view
  • Broadcasting
  • Indexing GroupedDataFrame s
  • Common API for types defined in DataFrames.jl
  • DataFrames.AbstractDataFrame
  • DataFrames.AsTable
  • DataFrames.DataFrame
  • DataFrames.DataFrameColumns
  • DataFrames.DataFrameRow
  • DataFrames.DataFrameRows
  • DataFrames.GroupKey
  • DataFrames.GroupKeys
  • DataFrames.GroupedDataFrame
  • DataFrames.RepeatedVector
  • DataFrames.StackedVector
  • DataFrames.SubDataFrame
  • Base.Iterators.only
  • Base.Iterators.partition
  • Base.allunique
  • Base.append!
  • Base.deleteat!
  • Base.eachcol
  • Base.eachrow
  • Base.empty!
  • Base.filter
  • Base.filter!
  • Base.insert!
  • Base.invpermute!
  • Base.isapprox
  • Base.isempty
  • Base.issorted
  • Base.keepat!
  • Base.length
  • Base.parent
  • Base.permute!
  • Base.permutedims
  • Base.popat!
  • Base.popfirst!
  • Base.prepend!
  • Base.propertynames
  • Base.pushfirst!
  • Base.reduce
  • Base.repeat
  • Base.resize!
  • Base.reverse
  • Base.reverse!
  • Base.similar
  • Base.sortperm
  • Base.unique
  • Base.unique!
  • Base.values
  • DataAPI.allcombinations
  • DataAPI.antijoin
  • DataAPI.colmetadata
  • DataAPI.colmetadata!
  • DataAPI.colmetadatakeys
  • DataAPI.crossjoin
  • DataAPI.deletecolmetadata!
  • DataAPI.deletemetadata!
  • DataAPI.describe
  • DataAPI.emptycolmetadata!
  • DataAPI.emptymetadata!
  • DataAPI.innerjoin
  • DataAPI.leftjoin
  • DataAPI.metadata
  • DataAPI.metadata!
  • DataAPI.metadatakeys
  • DataAPI.ncol
  • DataAPI.nrow
  • DataAPI.outerjoin
  • DataAPI.rightjoin
  • DataAPI.rownumber
  • DataAPI.semijoin
  • DataFrames.allowmissing!
  • DataFrames.combine
  • DataFrames.completecases
  • DataFrames.disallowmissing!
  • DataFrames.dropmissing
  • DataFrames.dropmissing!
  • DataFrames.fillcombinations
  • DataFrames.flatten
  • DataFrames.groupby
  • DataFrames.groupcols
  • DataFrames.groupindices
  • DataFrames.insertcols
  • DataFrames.insertcols!
  • DataFrames.leftjoin!
  • DataFrames.mapcols
  • DataFrames.mapcols!
  • DataFrames.nonunique
  • DataFrames.order
  • DataFrames.proprow
  • DataFrames.rename
  • DataFrames.rename!
  • DataFrames.repeat!
  • DataFrames.select
  • DataFrames.select!
  • DataFrames.subset
  • DataFrames.subset!
  • DataFrames.table_transformation
  • DataFrames.transform
  • DataFrames.transform!
  • DataFrames.unstack
  • DataFrames.valuecols
  • Missings.allowmissing
  • Missings.disallowmissing
  • Random.shuffle
  • Random.shuffle!




DataViewer.jl supported data file formats

I was interested in the recent post on DataViewer.jl and wondered if it could be used with CSV files or whether it requires HDF5/JLD2/JSON data files?

Why not give it a try?

An alternative would be Queryverse. However, I am not sure how up to date it is (Queryverse | Queryverse, Julia packages for data science).

Try it out, and if it doesn’t work, create an issue at Issues · triscale-innov/DataViewer.jl · GitHub …

See also the package section on extending DataViewer to support more data formats .

NetCDF support would be very nice for the geoscience community.

It should all work. The underlying Javascript Data Voyager hasn’t seen any updates in a long time though, not clear to me whether that particular project is still alive or not… But the existing functionality should just work.

Thanks for your interest! For now, DataViewer only supports HDF5 , JLD2 and JSON data files. But more fundamentally, it was designed with tree-like data structures in mind (think things like dictionaries of dictionaries of arrays of dictionaries).

It would probably not be difficult to add support for tabular/columnar file formats (like CSV, or Arrow which was mentioned in the other thread), but I’m not sure how we’d want to display them:

  • like a dictionary of array-like columns? → this would be already supported, probably the best option for now
  • like a vector of dict-like rows? → this would also already be possible, but would it be useful?
  • like a spreadsheet? → this does not currently exist in DataViewer and would probably be a bit more work to implement

But (and here I might very well be wrong because I almost never work with such data), I’m under the impression that there already exist lots of tools which would be more suited to flat, tabular data. For example, in the QueryVerse (which has already been mentioned in this thread) I would expect the DataVoyager UI or the ElectronDisplay “table display” feature to be particularly useful with columnar data coming from CSV files.


One slightly off topic response from me. I am surprised that the .jld2 and .hdf5 file extensions are being looked for. Surely the file type can be found in the header?

Sorry if I am exposing my ignorance here…

Sure enough, FileTypes.jl does not cover these file types (GitHub - JuliaIO/FileTypes.jl: a small and dependency-free Julia package to infer file and MIME type by checking the magic numbers signature).

Thanks for all the responses and for being pointed to Queryverse, which may suit me better.

That’s a very good point!

Lack of time was the main reason I did not do this, but I did consider it at one point, which led me to the following remarks:

  • I’m not sure whether it would be possible to reliably auto-detect text-based formats like JSON (or CSV , for that matter), so a file-extension-based mechanism might be needed anyway.
  • Since the JLD2 format is itself based on HDF5 , I’m not sure a magic-number-based approach à la FileTypes.jl could reliably distinguish between the two. It might be possible to implement a two-stage approach, though: determine that the file is an HDF5 container first, then look in it for specific meta-data signalling a JLD2 file.
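A rough sketch of the first stage of such an approach, assuming a hand-rolled check of the standard 8-byte HDF5 signature (as noted further down in the thread, FileIO already handles this kind of detection):

    # detect an HDF5 container by its 8-byte signature "\x89HDF\r\n\x1a\n";
    # the signature may also sit at offsets 512, 1024, ... when a userblock is present
    const HDF5_MAGIC = UInt8[0x89, 0x48, 0x44, 0x46, 0x0d, 0x0a, 0x1a, 0x0a]

    function is_hdf5_container(path::AbstractString)
        bytes = open(io -> read(io, 8), path)
        return bytes == HDF5_MAGIC
    end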

Is that what your file contains: TransferFunction instances? (In what type of file?)

One problem I see is that, in order to handle those, DataViewer would have to know about the TransferFunction type. That could probably be a nice extension that depends both on ControlSystemsBase and DataViewer .

Yes, indeed. Well, it can be StateSpace or TransferFunction; they can easily be converted into each other. I have lots of these files (simple example):

JLD2 files have custom magic bytes at the beginning. (Here’s JLD2’s own header verification.)

If required, I can help with JLD2 specific things.

EDIT: also, FileIO can already correctly identify JLD2 / HDF5

Note that NetCDF files are also just HDF5 with extra metadata strapped on.

JLD2 can read NetCDF files already, with the caveat that using the metadata is not implemented.

Very good to know, thanks!

I guess I still need the extension-based mechanism for things like JSON, but this is still a very nice improvement for everything else! I probably won’t have any time to work on this soon, but I filed an issue to remember about it.

DataLoaders.jl

Documentation (latest)

A Julia package implementing performant data loading for deep learning on out-of-memory datasets. It works like PyTorch's DataLoader.

What does it do?

  • Uses multi-threading to load data in parallel while keeping the primary thread free for the training loop
  • Handles batching and collating
  • Is simple to extend for custom datasets
  • Integrates well with other packages in the ecosystem
  • Allows for in-place loading to reduce memory load

When should you use it?

  • You have a dataset that does not fit into memory
  • You want to reduce the time your training loop is waiting for the next batch of data

How do you use it?

Install it like any other Julia package using the package manager (see setup):
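From the Pkg REPL mode, that is:

    ]add DataLoaders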

After installation, import it, create a DataLoader from a dataset and batch size, and iterate over it:
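A minimal sketch along the lines of the package's README (the array sizes are illustrative):

    using DataLoaders

    # 10000 observations: inputs with 128 features and one target feature
    data = (rand(128, 10000), rand(1, 10000))

    # iterate over the dataset in batches of 16
    for (xs, ys) in DataLoader(data, 16)
        # xs is a 128×16 batch of inputs, ys the matching 1×16 batch of targets
    end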

Next, you may want to read

  • What datasets you can use it with
  • How it compares to PyTorch's data loader

Required Packages

  • AbstractTrees
  • ChainRulesCore
  • ChangesOfVariables
  • CodeTracking
  • ColorSchemes
  • ColorVectorSpace
  • ConcurrentUtilities
  • DataStructures
  • DelimitedFiles
  • DocStringExtensions
  • FixedPointNumbers
  • FlameGraphs
  • FoldingTrees
  • IndirectArrays
  • InverseFunctions
  • IrrationalConstants
  • JuliaInterpreter
  • JuliaSyntax
  • LaTeXStrings
  • LeftChildRightSiblingTrees
  • LogExpFunctions
  • MLDataPattern
  • ThreadPools

Used By Packages

  • DLPipelines

