Blog

  • jax

    Transformable numerical computing at scale

    Transformations | Scaling | Install guide | Change logs | Reference docs

    What is JAX?

    JAX is a Python library for accelerator-oriented array computation and program transformation, designed for high-performance numerical computing and large-scale machine learning.

    JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via jax.grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order.
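    To make the composition concrete, here is a minimal sketch (the function `f` is illustrative; `jax.grad` and `jax.jacfwd` are standard JAX API) mixing reverse- and forward-mode:

```python
import jax
import jax.numpy as jnp

def f(x):
  return jnp.sin(x) * x**2

# reverse-mode gradient, then a forward-mode derivative of that gradient:
second = jax.jacfwd(jax.grad(f))
print(second(1.0))  # f''(x) = 4x*cos(x) + (2 - x**2)*sin(x), evaluated at x = 1
```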

    JAX uses XLA to compile and scale your NumPy programs on TPUs, GPUs, and other hardware accelerators. You can compile your own pure functions with jax.jit. Compilation and automatic differentiation can be composed arbitrarily.

    Dig a little deeper, and you’ll see that JAX is really an extensible system for composable function transformations at scale.

    This is a research project, not an official Google product. Expect sharp edges. Please help by trying it out, reporting bugs, and letting us know what you think!

    import jax
    import jax.numpy as jnp
    
    def predict(params, inputs):
      for W, b in params:
        outputs = jnp.dot(inputs, W) + b
        inputs = jnp.tanh(outputs)  # inputs to the next layer
      return outputs                # no activation on last layer
    
    def loss(params, inputs, targets):
      preds = predict(params, inputs)
      return jnp.sum((preds - targets)**2)
    
    grad_loss = jax.jit(jax.grad(loss))  # compiled gradient evaluation function
    perex_grads = jax.jit(jax.vmap(grad_loss, in_axes=(None, 0, 0)))  # fast per-example grads

    Transformations

    At its core, JAX is an extensible system for transforming numerical functions. Here are three: jax.grad, jax.jit, and jax.vmap.

    Automatic differentiation with grad

    Use jax.grad to efficiently compute reverse-mode gradients:

    import jax
    import jax.numpy as jnp
    
    def tanh(x):
      y = jnp.exp(-2.0 * x)
      return (1.0 - y) / (1.0 + y)
    
    grad_tanh = jax.grad(tanh)
    print(grad_tanh(1.0))
    # prints 0.4199743

    You can differentiate to any order with grad:

    print(jax.grad(jax.grad(jax.grad(tanh)))(1.0))
    # prints 0.62162673

    You’re free to use differentiation with Python control flow:

    def abs_val(x):
      if x > 0:
        return x
      else:
        return -x
    
    abs_val_grad = jax.grad(abs_val)
    print(abs_val_grad(1.0))   # prints 1.0
    print(abs_val_grad(-1.0))  # prints -1.0 (abs_val is re-evaluated)

    See the JAX Autodiff Cookbook and the reference docs on automatic differentiation for more.

    Compilation with jit

    Use XLA to compile your functions end-to-end with jit, used either as an @jit decorator or as a higher-order function.

    import jax
    import jax.numpy as jnp
    
    def slow_f(x):
      # Element-wise ops see a large benefit from fusion
      return x * x + x * 2.0
    
    x = jnp.ones((5000, 5000))
    fast_f = jax.jit(slow_f)
    %timeit -n10 -r3 fast_f(x)
    %timeit -n10 -r3 slow_f(x)

    Using jax.jit constrains the kind of Python control flow the function can use; see the tutorial on Control Flow and Logical Operators with JIT for more.
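    As an illustrative sketch (assuming nothing beyond standard JAX API), a Python `if` on a traced value fails inside `jit`, while the structured `jax.lax.cond` works:

```python
import jax
import jax.numpy as jnp

@jax.jit
def relu_like(x):
  # A Python-level branch such as `if x > 0: ...` would raise a tracer
  # error under jit, because x is abstract at trace time.
  # A structured conditional works instead:
  return jax.lax.cond(x > 0, lambda v: v, lambda v: jnp.zeros_like(v), x)

print(relu_like(3.0), relu_like(-3.0))
```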

    Auto-vectorization with vmap

    vmap maps a function along array axes. But instead of just looping over function applications, it pushes the loop down onto the function’s primitive operations, e.g. turning matrix-vector multiplies into matrix-matrix multiplies for better performance.

    Using vmap can save you from having to carry around batch dimensions in your code:

    import jax
    import jax.numpy as jnp
    
    def l1_distance(x, y):
      assert x.ndim == y.ndim == 1  # only works on 1D inputs
      return jnp.sum(jnp.abs(x - y))
    
    def pairwise_distances(dist1D, xs):
      return jax.vmap(jax.vmap(dist1D, (0, None)), (None, 0))(xs, xs)
    
    xs = jax.random.normal(jax.random.key(0), (100, 3))
    dists = pairwise_distances(l1_distance, xs)
    dists.shape  # (100, 100)

    By composing jax.vmap with jax.grad and jax.jit, we can get efficient Jacobian matrices, or per-example gradients:

    per_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))
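    As a self-contained sketch of what this buys you (the one-layer network and shapes below are illustrative, not taken from the example above):

```python
import jax
import jax.numpy as jnp

def predict(params, inputs):
  for W, b in params:
    inputs = jnp.tanh(jnp.dot(inputs, W) + b)
  return inputs

def loss(params, inputs, targets):
  return jnp.sum((predict(params, inputs) - targets) ** 2)

k1, k2 = jax.random.split(jax.random.key(0))
params = [(jax.random.normal(k1, (3, 2)), jnp.zeros(2))]  # one layer: 3 -> 2
inputs = jax.random.normal(k2, (10, 3))                   # batch of 10 examples
targets = jnp.zeros((10, 2))

per_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))
gW, gb = per_example_grads(params, inputs, targets)[0]
print(gW.shape, gb.shape)  # (10, 3, 2) (10, 2): one gradient per example
```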

    Scaling

    To scale your computations across thousands of devices, you can use any composition of these:

    Mode     | View       | Explicit sharding? | Explicit collectives?
    ---------|------------|--------------------|----------------------
    Auto     | Global     | no                 | no
    Explicit | Global     | yes                | no
    Manual   | Per-device | yes                | yes
    from jax.sharding import set_mesh, AxisType, PartitionSpec as P
    mesh = jax.make_mesh((8,), ('data',), axis_types=(AxisType.Explicit,))
    set_mesh(mesh)
    
    # parameters are sharded for FSDP:
    for W, b in params:
      print(f'{jax.typeof(W)}')  # f32[512@data,512]
      print(f'{jax.typeof(b)}')  # f32[512]
    
    # shard data for batch parallelism:
    inputs, targets = jax.device_put((inputs, targets), P('data'))
    
    # evaluate gradients, automatically parallelized!
    gradfun = jax.jit(jax.grad(loss))
    param_grads = gradfun(params, inputs, targets)

    See the tutorial and advanced guides for more.

    Gotchas and sharp bits

    See the Gotchas Notebook.

    Installation

    Supported platforms

               | Linux x86_64 | Linux aarch64 | Mac aarch64  | Windows x86_64 | Windows WSL2 x86_64
    CPU        | yes          | yes           | yes          | yes            | yes
    NVIDIA GPU | yes          | yes           | n/a          | no             | experimental
    Google TPU | yes          | n/a           | n/a          | n/a            | n/a
    AMD GPU    | yes          | no            | n/a          | no             | no
    Apple GPU  | n/a          | no            | experimental | n/a            | n/a
    Intel GPU  | experimental | n/a           | n/a          | no             | no

    Instructions

    Platform        | Instructions
    ----------------|------------------------------
    CPU             | pip install -U jax
    NVIDIA GPU      | pip install -U "jax[cuda12]"
    Google TPU      | pip install -U "jax[tpu]"
    AMD GPU (Linux) | Follow AMD’s instructions.
    Mac GPU         | Follow Apple’s instructions.
    Intel GPU       | Follow Intel’s instructions.

    See the documentation for information on alternative installation strategies. These include compiling from source, installing with Docker, using other versions of CUDA, a community-supported conda build, and answers to some frequently-asked questions.

    Citing JAX

    To cite this repository:

    @software{jax2018github,
      author = {James Bradbury and Roy Frostig and Peter Hawkins and Matthew James Johnson and Chris Leary and Dougal Maclaurin and George Necula and Adam Paszke and Jake Vander{P}las and Skye Wanderman-{M}ilne and Qiao Zhang},
      title = {{JAX}: composable transformations of {P}ython+{N}um{P}y programs},
      url = {http://github.com/jax-ml/jax},
      version = {0.3.13},
      year = {2018},
    }
    

    In the above bibtex entry, names are in alphabetical order, the version number is intended to be that from jax/version.py, and the year corresponds to the project’s open-source release.

    A nascent version of JAX, supporting only automatic differentiation and compilation to XLA, was described in a paper that appeared at SysML 2018. We’re currently working on covering JAX’s ideas and capabilities in a more comprehensive and up-to-date paper.

    Reference documentation

    For details about the JAX API, see the reference documentation.

    For getting started as a JAX developer, see the developer documentation.

    Visit original content creator repository https://github.com/jax-ml/jax
  • linkerd2-operator

    Linkerd2 Operator

    ⚠️ Project status (pre-alpha)

    The project is in active development and things are rapidly changing.

    Overview

    This Linkerd2 operator is a simple operator that takes care of deploying all the components of Linkerd2.

    Prerequisites

    • go v1.13+
    • docker v17.03+
    • kubectl v1.14.1+
    • operator-sdk
    • kustomize
    • Access to a Kubernetes v1.14.1+ cluster

    Getting Started

    Cloning the repository

    Checkout this repository

    $ mkdir -p $GOPATH/src/github.com/spaghettifunk
    $ cd $GOPATH/src/github.com/spaghettifunk
    $ git clone https://github.com/spaghettifunk/linkerd2-operator.git
    $ cd linkerd2-operator
    

    Installing

    There is a Makefile for convenience. Run the following commands to start:

    export POD_NAMESPACE=linkerd
    make install
    make run

    In a new shell, you can now create the Linkerd deployments. There is an example of usage in config/sample/linkerd.example.yaml; use that for testing. Assuming you want to use that file, run the following:

    kubectl create ns linkerd
    kubectl apply -f config/sample/linkerd.example.yaml

    In the previous shell you should see the operator trying to reconcile the object.

    Uninstalling

    To uninstall all that was performed in the above step run make uninstall.

    Troubleshooting

    Use the following command to check the operator logs.

    kubectl logs deployment.apps/linkerd2-operator -n linkerd

    Running Tests

    Run make test-e2e to run the integration e2e tests with different options. For more information see the writing e2e tests guide.

    Visit original content creator repository https://github.com/spaghettifunk/linkerd2-operator
  • winKeyboard

    winKeyboard (JNI)

    A tiny Java helper for Java applications running under Windows that emulates the keyboard via scan codes.

    If a Java application sends keystrokes to a game using java.awt.Robot, games that use the DirectInput API for reading keyboard input (scancodes) may not register them.

    Here is a simple Java helper for Windows which allows Java applications to send keystrokes to a game/application by generating keyboard scancodes using the Java Native Interface (JNI):

    Keyboard keyboard = new Keyboard();
    keyboard.winKeyPress(ScanCode.DIK_UP);
    //Thread.sleep(1000);
    keyboard.winKeyRelease(ScanCode.DIK_UP);

    To send a combination (e.g. LEFT_SHIFT+A):

     keyboard.winPressCombination(ScanCode.DIK_LSHIFT, ScanCode.DIK_A);
     keyboard.winReleaseCombination(ScanCode.DIK_LSHIFT, ScanCode.DIK_A);

    LEFT_CTRL+LEFT_SHIFT+A

     keyboard.winPressCombination(ScanCode.DIK_LCONTROL, ScanCode.DIK_LSHIFT, ScanCode.DIK_A);
     keyboard.winReleaseCombination(ScanCode.DIK_LCONTROL, ScanCode.DIK_LSHIFT, ScanCode.DIK_A);

    See this in action in this video

    Important

    Make sure you place SCGen32.dll and SCGen64.dll in the Java library path; otherwise java.lang.UnsatisfiedLinkError will be thrown.
    
    Visit original content creator repository https://github.com/umer0586/winKeyboard
  • Ti.Android.Animator

    Ti.Android.Animator

    A newer version of @Animecyc Android TitaniumAnimator

    A drop-in animation replacement for Titanium. This module’s aim is to mimic as much of the Titanium animation module as possible with the addition of new timing functions and better performance. As of right now the only properties that can be animated are: rotate, top, bottom, left, right, width, height, opacity, color and backgroundColor. The transform property is not supported at this time.

    If you are animating views that don’t contain any sort of transparency you will see performance gains when animating large or otherwise complex view groups.

    Support

    • Android: 7.0+

    Usage

    Download it here

    var Animator = require('ti.android.animator');
        
    var mainWindow = Ti.UI.createWindow({
    	backgroundColor : 'white'
    });
        
    var animationView = Ti.UI.createView({
        	backgroundColor : 'red',
        	width : 100,
        	height : 100
    });
    
    animationView.addEventListener('click', function () {
    	Animator.animate(animationView, {
    		duration : 1000,
    		easing : Animator.BOUNCE_OUT,
    		width : 150,
    		height : 150,
    		backgroundColor : 'blue',
    		opacity : 0.5,
    		bottom : 0
    	}, function () {
    		Animator.animate(animationView, {
    			duration : 1000,
    			easing : Animator.BOUNCE_OUT,
    			width : 100,
    			height : 100,
    			backgroundColor : 'red',
    			opacity : 1,
    			bottom : null
    		});
    	});
    });
    
    mainWindow.add(animationView);
    
    mainWindow.open();

    Rotations

    If you need to perform a rotation you can pass the rotate property, which accepts a float. The rotate property is the angle you wish to rotate to; a positive value results in a counter-clockwise rotation, while a negative value results in a clockwise rotation.

    Once a rotation has been performed, subsequent rotations are performed from the last rotation angle. To simplify multiple rotations you can pass values > 360; for example, to do two complete rotations you can pass a value of 720.

    Layout Support

    When animating a complex layout (such as a vertical layout inside a vertical layout) it may be necessary to specify which parent to propagate the animations from. You can do this by setting parentForAnimation and passing the proxy that holds the views that should animate. This is especially useful in cases where you are animating inside of a Ti.UI.ScrollView.

    Easing Functions

    The below easing functions can be accessed as you would any other Titanium constant. Assuming the above usage example you can access all of these by passing the below name to the module, such as in: Animator.ELASTIC_IN_OUT

    • LINEAR (default)
    • QUAD_IN
    • QUAD_OUT
    • QUAD_IN_OUT
    • CUBIC_IN
    • CUBIC_OUT
    • CUBIC_IN_OUT
    • QUART_IN
    • QUART_OUT
    • QUART_IN_OUT
    • QUINT_IN
    • QUINT_OUT
    • QUINT_IN_OUT
    • SINE_IN
    • SINE_OUT
    • SINE_IN_OUT
    • CIRC_IN
    • CIRC_OUT
    • CIRC_IN_OUT
    • EXP_IN
    • EXP_OUT
    • EXP_IN_OUT
    • ELASTIC_IN
    • ELASTIC_OUT
    • ELASTIC_IN_OUT
    • BACK_IN
    • BACK_OUT
    • BACK_IN_OUT
    • BOUNCE_IN
    • BOUNCE_OUT
    • BOUNCE_IN_OUT

    Visit original content creator repository
    https://github.com/deckameron/Ti.Android.Animator

  • article_analysis_gpt

    Article Analysis with GPT-3.5-turbo

    This Python script allows you to interact with a GPT-3.5-turbo model by OpenAI to analyze and summarize articles from URLs. You can ask questions about the article, and the model will answer based on the content. The script uses the newspaper3k library to extract the article content and the OpenAI API to communicate with the GPT-3.5-turbo model.

    Features

    • Extracts article content from a given URL
    • Splits the article into smaller parts if needed
    • Summarizes the article upon request
    • Allows the user to ask questions about the article
    • Interacts with the GPT-3.5-turbo model via the OpenAI API

    Installation

    1. Clone the repository:


    git clone https://github.com/maximedotair/article_analysis_gpt.git

    2. Change to the project directory:

    cd article_analysis_gpt

    3. Create a virtual environment and activate it:

    python3 -m venv myenv
    source myenv/bin/activate

    4. Install the required packages:

    pip install -r requirements.txt

    5. Add your OpenAI API key to the script:

    Replace your_api_key_here with your actual API key in the line:

    api_key = "your_api_key_here"

    Usage

    Run the script:

    python article_analysis.py

    Enter the URL of the article to summarize: https://example.com/article-url

    Choose to summarize the article or ask your own question:

    Do you want to summarize the article? (yes/no): yes

    Ask questions about the article:

    Ask a question (previous questions and answers are not saved) or type ‘exit’ to quit: What is the main point of the article?

    Type ‘exit’ to quit the script:

    Ask a question (previous questions and answers are not saved) or type ‘exit’ to quit: exit

    Dependencies

    OpenAI – Python library for the OpenAI API
    newspaper3k – Python library for extracting and parsing newspaper articles

    License

    This project is licensed under the MIT License.
    

    Visit original content creator repository
    https://github.com/maximedotair/article_analysis_gpt

  • milk-engine

    About The Milk Engineers

    Our Path to Greatness

    We founded a milk engine shop in 1928, with the idea of providing people the ability to buy premium quality milk products directly from our shop by cutting out the middle man. At the time, almost all milk engines were home delivered and paid for on a weekly or monthly basis.

    During the depression, we worked early mornings together to keep our milk shop running, dedicated to making our dream of a more perfect world a reality, running milk delivery routes throughout Illinois. Shop expansion continued through the 50’s and 70’s. In the early 80’s our milk marketing team became actively involved in the evolution of the Milk Engine™ brand. Convinced that milk engine sales could be expanded significantly by launching a premium line, our milk engineers developed a revolutionary milk-refinement process in 1982. It offered exotic engines, packed full of high quality milk to make the best milk products we could.

    Milk Engineering™ Magazine rated Milk Engine™ Milk Engine Model C the “Number One Exotic Milk Engine” in a 1984 competition. Over 500 engines were submitted by 100 competitors worldwide. Today, Milk Engine™’s products are still made with pure milk.

    Even though Milk Engine™ is now enjoyed all over the world, we still have the same simple approach: use the best milk to make the best engines. So, whether you’re an amateur consumer or a certified milk professional, you’ll always get milk engines made with master craftsmanship.

    Visit original content creator repository
    https://github.com/matiaskotlik/milk-engine

  • vROBUST

    Visit original content creator repository
    https://github.com/codembassy/vROBUST

  • lyricsFinder

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    yarn start

    Runs the app in the development mode.
    Open http://localhost:3000 to view it in the browser.

    The page will reload if you make edits.
    You will also see any lint errors in the console.

    yarn test

    Launches the test runner in the interactive watch mode.
    See the section about running tests for more information.

    yarn build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    yarn eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    About App

    This app was created mainly as an exercise in learning React’s context component (the Context API).

    • Libraries used in it:
    1. axios
    2. ant-design
    3. react-router-dom

    axios

    • To get data from the backend in lifecycle hooks, i.e.:

    componentDidMount() {
      const promise = axios.get('url')
    }
    • axios request syntax:

    axios
      .get('url')
      .then(function(response) {
        // handle success
      })
      .catch(function(error) {
        // handle errors
      });

    context or context API

    • React’s context allows you to share information with any component, without the help of props.
    • Context provides a way to pass data through the component tree without having to pass props down manually at every level.

    Create a file context.jsx in the root path

    • context component:
    const Context = React.createContext();
    • There are two exported components:
    1. class Provider

    For adding to the root file App.js.
    State is changed using a Redux-style dispatch property:

    export class Provider extends Component {
      state = {
        data: [],
        dispatch: action => this.setState(state => reducer(state, action))
        // define an action with the same 'type' value in another file; with the
        // help of its payload we can change the state.
      };
      componentDidMount() {
        // if you want to change state in this file, use 'setState' here
      }
      render() {
        return (
          <Context.Provider value={this.state}>
            {this.props.children}
          </Context.Provider>
        );
      }
    }

    reducer component:

    const reducer = (state, action) => {
      switch (action.type) {
        case 'object_in_type':
          return {
            ...state,
            data: action.payload // payload is the changed data that comes from the file where the 'Consumer' is used.
          };
        default:
          return state;
      }
    };
    2. const Consumer

    For adding to files where we use the states or values provided by the Provider.

    export const Consumer = Context.Consumer;

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.

    Code Splitting

    This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting

    Analyzing the Bundle Size

    This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size

    Making a Progressive Web App

    This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app

    Advanced Configuration

    This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration

    Deployment

    This section has moved here: https://facebook.github.io/create-react-app/docs/deployment

    yarn build fails to minify

    This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify

    Visit original content creator repository
    https://github.com/ishanajmeri/lyricsFinder

  • dutwrapper-dart

    DutWrapper

    An unofficial wrapper for sv.dut.udn.vn – the Da Nang University of Science and Technology student page.

    Version

    Building requirements

    FAQ

    Branches in dutwrapper?

    • main/stable: Default branch and main release.
    • draft: This branch is used to track work in progress and is unstable. Use it at your own risk.

    I received an error about login while running AccountTest?

    • Make sure you have the dut_account variable set with the syntax studentid|password. This keeps your credentials secure when testing the project.

    Where can I find Wiki for this library?

    • Unfortunately, I haven’t written a wiki yet.
    • Instead, you can browse the source code to review the API.


    I’ve got an issue or a feature request for this library. What should I do?

    • Navigate to the issue tab on this repository to create an issue or feature request.

    Credit and license?

    Visit original content creator repository https://github.com/dutwrapper/dutwrapper-dart
  • md-to-html

    md-to-html

    unified processor to parse and serialize Markdown to HTML.
    Powered by my favorite plugins.

    Install

    > yarn add @rqbazan/md-to-html

    Usage

    Say we have the following file, doc.md

    ---
    title: This personal site
    date: 2020-12-25
    author: Santiago Q. Bazan
    ---
    
    # Sample note :tada:
    
    Here's a quick _sample note_

    And our script, index.js, looks as follows:

    import path from 'path'
    import fs from 'fs'
    import mdToHtml from '@rqbazan/md-to-html'
    
    function main() {
      const pathfile = path.join(__dirname, 'doc.md')
      const doc = fs.readFileSync(pathfile)
      const vfile = mdToHtml.processSync(doc)
    
      console.log(vfile.data) // yaml metadata
      console.log(vfile.toString()) // html
    }
    
    main()

    Now, running node index.js yields:

    λ node index.js
    {
      title: 'This personal site',
      date: '2020-12-25',
      author: 'Santiago Q. Bazan'
    }
    <h1>Sample note <img class="emoji" draggable="false" alt="🎉" src="https://twemoji.maxcdn.com/v/13.0.1/72x72/1f389.png" title="🎉"/></h1>
    <p>Here's a quick <em>sample note</em></p>
    

    Here is the generated HTML:

    <h1>
      Sample note
      <img
        class="emoji"
        draggable="false"
        alt="🎉"
        src="https://twemoji.maxcdn.com/v/13.0.1/72x72/1f389.png"
        title="🎉"
      />
    </h1>
    <p>Here's a quick <em>sample note</em></p>

    Plugins

    License

    MIT © Ricardo Q. Bazan

    Visit original content creator repository
    https://github.com/rqbazan/md-to-html