Blog

  • links-disabler

    Links Disabler


    A lightweight extension which lets you disable all links on a webpage.

    Get the extension

    Usage

    To toggle between disabling and enabling all links, simply click the extension icon (located in the upper right corner of the browser) or use the keyboard shortcut Alt + Shift + D.

    Features

    The extension currently comes with two main features, which can be activated through the extension’s options page (see below for more details).

    Save toggle status globally and on page reload

    If this option is checked, when you hit “disable links” the change will be reflected on all tabs, and links will stay disabled even after page reload.

    Only disable links that follow a pattern

    If this option is enabled, when you hit “disable links” only links in the disable list will be disabled. All other links will stay enabled.

    This option is automatically enabled if the disable list is not empty.

    The disable list can be configured with a pattern per line, and accepts wildcards with the * sign.

    Example:

    *.google.com*
    https://www.amazon.com/stores/node/*
    http://*
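
    To illustrate the matching semantics, here is a small Python sketch of this kind of wildcard matching (a conceptual illustration with an assumed helper; the extension’s actual matching code may differ):

    from fnmatch import fnmatch

    patterns = [
        '*.google.com*',
        'https://www.amazon.com/stores/node/*',
        'http://*',
    ]

    def should_disable(url):
        # A link is disabled if its URL matches any pattern in the disable list
        return any(fnmatch(url, p) for p in patterns)

    print(should_disable('https://maps.google.com/foo'))  # True
    print(should_disable('https://example.com'))          # False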
    

    Configuration

    Shortcuts

    You can configure the shortcut in the Chrome shortcuts page at chrome://extensions/shortcuts.

    Options

    You can configure the extension options through the extension’s options page:

    (Screenshot: the links-disabler options page)

    Like the project?

    Buy Me A Coffee

    Bugs and feature requests

    For any issues, bugs, and feature requests, feel free to open an issue on GitHub.

    Versioning

    Release versions on GitHub correspond to the matching release version on the Chrome Web Store.

    We follow Semantic Versioning. The version X.Y.Z indicates:

    • X is the major version (backward-incompatible),
    • Y is the minor version (backward-compatible), and
    • Z is the patch version (backward-compatible bug fix).
    Visit original content creator repository https://github.com/fabiosangregorio/links-disabler
  • dita-ot-helper

    dita-ot-helper


    A little helper for automating some of the more tedious tasks of working with the DITA Open Toolkit

    Note: This README is compiled (using dita-ot-helper) from the DITA sources in the /samples/readme folder. Direct changes to the README.md will get overwritten.

    Abstract

    At its core, the goal of this project is to create an abstraction layer for the DITA Open Toolkit compilation process that “fixes” a few pain-points. These “fixed” (to a degree) aspects are:

    • An easy-to-use project file system allowing easy automation
    • Installing DITA-OT in autonomous environments (such as Continuous Integration)
    • DITA OT Plugin dependencies (local and remote) for specific compilations
    • Local plugin reinstallation from a development directory: a documentation repository contains a customized PDF plugin in a folder, and dita-ot-helper automatically (re-)installs this plugin before compiling.

    A config file can look as simple as this:

    {
      "input": "input.dita",
      "plugins": [
        "org.dita.pdf2",
        "./org.mypdf.plugin"
      ],
      "transtype": "companyPDF"
    }
    

    This automatically compiles using the local org.mypdf.plugin folder’s plugin. It also uses the companyPDF transtype this plugin might define.

    With this config file, all it takes to compile the document (without the plugin being pre-installed, etc.) is running

    $ dita-ot-helper config.json
    

    from your command line. It’s as easy as that.

    Documentation

    Dependencies

    • NodeJS, v10+
    • on Windows: .NET Framework 4.5+ and PowerShell 3 (preinstalled on Windows 8+)
    • Optional: dita from the DITA Open Toolkit, version 3.5+ (can also be installed temporarily using the helper!)

    Install dita-ot-helper

    1. Install the dita-ot-helper using npm

      Run

      $ npm install -g dita-ot-helper
      

      in your command line.

    dita-ot-helper is now installed on your computer.

    Compile DITA documents

    Compiling DITA documents using the dita-ot-helper

    1. Create a config.json for your project.

      The config.json defines how your document gets compiled.

      Note: You can find a few examples for configurations in the repository’s samples directory. All options of the config file are documented below in the JSON Config File section.

      In this case, we want to compile a document.ditamap file using the markdown transtype. Our config.json (next to our DITA map file) could therefore look like this:

      {
          "input": "document.ditamap",
          "plugins": ["org.lwdita"],
          "transtype": "markdown"
      }
      
    2. Compile your document using the dita-ot-helper

      In your command line, run:

      $ dita-ot-helper config.json
      

      Note: By default, the DITA command output is hidden. To enable it, use the -v or --verbose argument in your command:

      $ dita-ot-helper -v config.json
      

      Tip: Compiling documents without having DITA-OT installed on your system

      It’s possible to compile documents using the helper without having DITA-OT installed. In this case, just add the -i or --install argument to your command. You can also specify a specific version of DITA-OT. This then installs the specified version of DITA-OT in a temporary location (which gets deleted after the command is run).

      This is especially useful for autonomous environments such as Continuous Integration as it allows you to compile DITA documents with one command without a lot of setup.

      After a short while, the tool outputs “> Compilation successful.”. The document is now compiled.

      If compilation isn’t successful, re-run the command using the --verbose option and follow the instructions in the error message shown there.

    Your document is now compiled and is in the out folder next to your config.json.

    Compile multiple documents

    Compile multiple documents with one command using glob patterns

    The CLI makes use of the node glob library. For possible glob patterns and other information, please refer to their documentation. Basic knowledge of glob patterns is required to fully understand this task.

    When you have multiple configurations, e.g., for multiple maps and/or multiple deliverables per map, it is possible to compile all of them using just one command.

    To provide an example, we’ll assume you have the following directory structure (samples/sample-3 provides a similar example):

    ./ (current working directory)
        end-user-manual-pdf.json (input => ./end-user-manual.ditamap) 
        end-user-manual-html.json (input => ./end-user-manual.ditamap)
        end-user-manual.ditamap (A DITA map)
        [...] (DITA documents)
    

    Tip: To avoid confusion, we suggest specifying individual output directories in your configuration files for each configuration. This way, each configuration will have exactly one corresponding output directory, as in the sketch below.
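
    For instance, end-user-manual-pdf.json could pin its own output directory (a hypothetical example; see the JSON Config File section below for all available fields):

    {
      "input": "end-user-manual.ditamap",
      "transtype": "pdf",
      "output": "out/pdf"
    }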

    1. Run the dita-ot-helper command using a glob pattern to match your configuration files

      The same steps as in Compile DITA documents apply here. The only difference is using a glob pattern instead of the file name of a config file.

      In our example from above, we need to run

      $ dita-ot-helper end-user-manual-*.json
      

      dita-ot-helper will process (i.e., compile) all the JSON files matching the pattern.

    All configurations are compiled.

    Related information

    https://www.npmjs.com/package/glob

    Exit codes

    Exit Code Description
    0 It worked
    1 Unknown error
    2 Aborted due to missing dependencies
    3 Aborted due to non-existent or non-readable config file
    4 Aborted due to invalid config file
    5 Something went wrong while installing DITA-OT using the -i flag
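
    In automation, these exit codes can be checked programmatically. A minimal sketch in Python wrapping the CLI (the mapping mirrors the table above; the invocation itself is just an example):

    import subprocess

    # Exit codes as documented in the table above
    EXIT_MESSAGES = {
        0: 'It worked',
        1: 'Unknown error',
        2: 'Aborted due to missing dependencies',
        3: 'Aborted due to non-existent or non-readable config file',
        4: 'Aborted due to invalid config file',
        5: 'DITA-OT installation via the -i flag failed',
    }

    result = subprocess.run(['dita-ot-helper', '-i', 'config.json'])
    print(EXIT_MESSAGES.get(result.returncode, 'Unexpected exit code'))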

    JSON Config File

    The project configuration file for the dita-ot-helper tool.

    Using a JSON config file (which is required for using dita-ot-helper), you can define:

    • required plugins
    • the project input file
    • the transtype that should get used

    The tool will then automatically install the plugins and compile the document according to those specifications.

    Below, you can find all the options you can put into your configuration file.

    Note: Your configuration file can have any possible filename. However, we recommend using dita-ot-helper.json or config.json for clarity.

    JSON object properties

    JSON field Type Description
    input string Relative (to the config.json) or absolute path to your input file. Gets passed to the -i argument of the dita command.
    output string Relative (to the config.json) or absolute path of the output directory of the compiled file. Gets passed to the -o argument of the dita command.
    propertyfile string Relative (to the config.json) or absolute path of a .properties file. Gets passed to the --propertyfile argument of the dita command.
    resource string Relative (to the config.json) or absolute path to a resource file, e.g., a map containing key definitions. Gets passed to the -r argument of the dita command.
    plugins string[] An array of plugin paths. dita-ot-helper will ensure these plugins are installed (or, if not, try to (re-) install them) before compilation. This accepts a few different types of plugin specifiers documented in the table below.
    transtype string The document’s transtype. Gets passed to the -f argument of the dita command.
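
    For illustration, a configuration using all of the fields above might look like this (all paths and the transtype are placeholders):

    {
      "input": "document.ditamap",
      "output": "out/pdf",
      "propertyfile": "build.properties",
      "resource": "keys.ditamap",
      "plugins": ["org.dita.pdf2", "./org.mypdf.plugin"],
      "transtype": "companyPDF"
    }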

    Plugin specifications

    Type Behavior Example
    Plugin Name Installs (if non-existent) a plugin by its name from the registry. Similar to dita install org.lwdita
    Plugin .zip URL Installs the plugin from the plugin ZIP file URL (via the internet). Similar to dita install https://example.com/dita-ot-pdf-plugin.zip
    Plugin .zip path Installs the plugin from the plugin ZIP file path. Similar to dita install ./my-plugin.zip, /home/example/plugin.zip
    Plugin directory path (Re-) Installs a plugin from its source directory. This is especially useful if you have a customized PDF plugin inside your documentation repository as you can simply specify this plugin and let dita-ot-helper do the work of zipping, installing and using it for you. Similar to zipping the specified directory and running dita install on the zipped file. ./plugins/com.example.pdf2

    Related information

    https://www.dita-ot.org/dev/topics/build-using-dita-command.html

    Visit original content creator repository https://github.com/fliegwerk/dita-ot-helper
  • xmovie

    xmovie


    A simple way of creating beautiful movies from xarray objects.

    With ever-increasing detail, modern scientific observations and model results lend themselves to visualization in the form of movies.

    Not only is a beautiful movie a fantastic way to wake up the crowd on a Friday afternoon of a weeklong conference, but it can also speed up the discovery process, since our eyes are amazing image processing devices.

    This module aims to facilitate movie rendering from data objects based on xarray objects.

    Xarray already provides a way to create quick and beautiful static images from your data using Matplotlib. Various packages provide facilities for animating Matplotlib figures.

    But it can become tedious to customize plots, particularly when map projections are used.

    The main aims of this module are:

    • Enable quick but high-quality movie frame creation from existing xarray objects with preset plot functions — create a movie with only 2 lines of code.
    • Provide high quality, customizable presets to create stunning visualizations with minimal setup.
    • Convert your static plot workflow to a movie with only a few lines of code, while maintaining all the flexibility of xarray and Matplotlib.
    • Optionally, use Dask for parallelized frame rendering.

    Installation

    The easiest way to install xmovie is via conda:

    conda install -c conda-forge xmovie
    

    You can also install via pip:

    pip install xmovie
    

    Documentation

    Check out the examples and API documentation at https://xmovie.readthedocs.io.

    Quickstart

    High-quality movies and gifs can be created with only a few lines:

    import xarray as xr
    from xmovie import Movie
    
    ds = xr.tutorial.open_dataset('air_temperature').isel(time=slice(0,150))
    mov = Movie(ds.air)
    mov.save('movie.mp4')

    Saving a .gif is as easy as changing the filename:

    mov.save('movie.gif')

    That is it! Now pat yourself on the back and enjoy your masterpiece.

    The GIF is created by first rendering a movie and then converting it to a GIF. If you want to keep both outputs, you can simply do mov.save('movie.gif', remove_movie=False).
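
    The feature list above also mentions optional Dask-parallelized frame rendering. A minimal sketch, assuming the data is chunked along the animation dimension (check the xmovie documentation for the exact requirements of your version):

    import xarray as xr
    from xmovie import Movie

    # Chunk along the frame dimension so frames can be rendered in parallel
    ds = xr.tutorial.open_dataset('air_temperature').isel(time=slice(0, 150)).chunk({'time': 1})
    mov = Movie(ds.air)
    mov.save('movie_parallel.mp4', parallel=True)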

    Visit original content creator repository https://github.com/jbusecke/xmovie
  • xap.sh

    xap.sh

    XAP – XFCE Actions Patcher

    Bash script to patch XFCE Thunar to be able to use custom actions everywhere.
    By default, most actions are disabled in network folders, Desktop, etc.
    Even the default action Open terminal here is disabled in network shares.

    XAP works only on Debian-based distros like Debian, Ubuntu and Mint with XFCE4.
    If you are interested in patching a non-Debian-based system, open a new issue and I may try to help.

    xap.sh will install apt-get packages and dependencies without confirmation;
    it will only ask for confirmation to install the modified thunar package.

    xap.sh will ask for confirmation before deleting the work folder or installing the patched thunar.
    It is possible to avoid the confirmations by using the execution parameters -f, -k, -d.

    Disclaimer

    XAP should be safe and has been tested on Ubuntu16+XFCE4, Xubuntu16, Mint17.3 (32bit & 64bit), however…
    Thunar is part of the XFCE system; I am not responsible for any harm caused by this script.
    USE AT YOUR OWN RISK!

    How To

    Enable source-code repository

    This script uses apt-get build-dep(s) to prepare the environment.
    In order to use this, you will need to have source-code repositories enabled.
    The easiest way is through Software Updater → Settings → Ubuntu Software Tab → Source code.
    Screenshots for Xubuntu16 are available at the end of this page.

    Download

    NOTE: If you use git clone or the zip, xap.sh will detect the local patch file and use it, instead of downloading it from the internet. This will also remove the dependency on curl or wget and disable the --mirror parameter.

    Clone using git

    git clone https://github.com/tavinus/xap.sh.git
    cd xap.sh
    ./xap.sh --help
    

    Manual zip download

    1. Download the master zip file.
    2. Extract the zip and open a terminal inside the xap.sh folder
    3. If xap.sh is not set as executable, use:
      • chmod +x ./xap.sh
    4. You should be all set:
      • ./xap.sh --help

    Get only xap.sh using wget

    wget 'https://raw.githubusercontent.com/tavinus/xap.sh/master/xap.sh' -O ./xap.sh && chmod +x ./xap.sh
    ./xap.sh --help
    

    Get only xap.sh using curl

    curl -L 'https://raw.githubusercontent.com/tavinus/xap.sh/master/xap.sh' -o ./xap.sh && chmod +x ./xap.sh
    ./xap.sh --help
    

    Example Runs

    Just run

    $ ./xap.sh 
    
    XAP - XFCE Actions Patcher v0.0.1
    
    Checking for sudo executable and privileges, enter your password if needed.
    [sudo] password for tavinus: 
    Done | Updating package lists
    Done | Installing thunar, thunar-data and devscripts
    Done | Installing build dependencies for thunar
    Done | Preparing work dir: /home/tavinus/xap_patch_temp
    Done | Getting thunar source
    Done | Downloading Patch
    Done | Testing patch with --dry-run
    Done | Applying patch
    Done | Building deb packages with dpkg-buildpackage
    Done | Locating libthunarx deb package
    
    Proceed with package install? (Y/y to install) y
    Done | Installing: libthunarx-2-0_1.6.11-0ubuntu0.16.04.1_i386.deb
    
    Success! Please reboot to apply the changes in thunar!
    
    The work directory with sources and deb packages can be removed now.
    Dir: /home/tavinus/xap_patch_temp
    
    Do You want to delete the dir? (Y/y to delete) n
    Kept working dir!
    
    Ciao
    

    No prompts, delete temp folder if already exists, keep temp files at the end

    This run also uses the GitHub mirror for the patch (-m).

    $ ./xap.sh -m -f -k -d
    
    XAP - XFCE Actions Patcher v0.0.1
    
    Work directory already exists! We need a clean dir to continue.
    Dir: /home/tavinus/xap_patch_temp
    Working dir removed successfully: /home/tavinus/xap_patch_temp
    Checking for sudo executable and privileges, enter your password if needed.
    Done | Updating package lists
    Done | Installing thunar thunar-data devscripts, we need these up-to-date
    Done | Installing build dependencies for thunar
    Done | Preparing work dir: /home/tavinus/xap_patch_temp
    Done | Getting thunar source
    Done | Downloading Patch
    Done | Testing patch with --dry-run
    Done | Applying patch
    Done | Building deb packages with dpkg-buildpackage
    Done | Locating libthunarx deb package
    Done | Installing: libthunarx-2-0_1.6.11-0ubuntu0.16.04.1_i386.deb
    
    Success! Please reboot to apply the changes in thunar!
    
    Keeping work dir: /home/tavinus/xap_patch_temp
    Ciao
    

    Following log file during execution

    Use this command in another terminal window.
    You need to run this AFTER starting xap.sh, obviously.

    tail -f /tmp/xap_run.log
    

    Options

    $ ./xap.sh --help
    
    XAP - XFCE Actions Patcher v0.0.1
    
    Usage: xap.sh [-f -d -k]
    
    Options:
      -V, --version           Show program name and version and exits
      -h, --help              Show this help screen and exits
      -m, --mirror            Use my github mirror for the patch, instead of
                              the original xfce bugzilla link.
          --debug             Debug mode, prints to screen instead of logfile.
                              It is usually better to check the logfile:
                              Use: tail -f /tmp/xap_run.log # in another terminal
      -f, --force             Do not ask to confirm system install
      -d, --delete            Do not ask to delete workfolder
                              1. If it already exists when XAP starts
                              2. When XAP finishes execution with success
      -k, --keep              Do not ask to delete work folder at the end
                              Keeps files when XAP finishes with success
    
    Work Folder:
     Location: /home/tavinus/xap_patch_temp
     Use --delete and --keep together to delete at the start of execution
     (if exists) and keep at the end without prompting anything.
    
    Patch File:
     The local patch file will be always used if available, download is disabled.
     If there is no local file, wget or curl will be used to download it.
     Use the "-m" parameter to download the patch from the github mirror.
    
    Apt-get Sources:
     Please make sure you enable source-code repositories in your
     apt-sources. Easiest way is with the Updater GUI.
    
    Examples:
      ./xap.sh             # will ask for confirmations
      ./xap.sh -m          # using github mirror
      ./xap.sh -f          # will install without asking
      ./xap.sh -m -f -k -d # will not ask anything and keep temp files
      ./xap.sh -m -f -d    # will not ask anything and delete temp files
    

    Patch to be applied

    Related Links

    Screenshots

    Enabling source-code repositories

    This shows how to enable source-code download at Xubuntu16 Software Updater.

    (Screenshots: Xubuntu Software Updater settings used to enable source-code repositories)

    Remove the Patch

    To restore the original unpatched Thunar use this command:

    $ sudo apt-get --reinstall install thunar thunar-data
    
    Visit original content creator repository https://github.com/tavinus/xap.sh
  • 10-idees-recues-anarchisme

    10-idees-recues-anarchisme

    This project aims to break down preconceived notions about anarchism.
    It is inspired by this booklet: http://www.groupe-germinal.org/2019/08/LivretTDPB

    You can preview this branch on StackBlitz:
    Open in StackBlitz

    🎨 Mockups

    View the mockups on Figma: https://www.figma.com/file/esyHO1X6EbdOajKOL0I6OI/Projet-A

    🚀 Project Structure

    Inside of your Astro project, you’ll see the following folders and files:

    /
    ├── public/
    │   └── favicon.svg
    ├── src/
    │   ├── components/
    │   │   └── Card.astro
    │   ├── layouts/
    │   │   └── Layout.astro
    │   └── pages/
    │       └── index.astro
    └── package.json
    

    Astro looks for .astro or .md files in the src/pages/ directory. Each page is exposed as a route based on its file name.

    There’s nothing special about src/components/, but that’s where we like to put any Astro/React/Vue/Svelte/Preact components.

    Any static assets, like images, can be placed in the public/ directory.

    🧞 Commands

    All commands are run from the root of the project, from a terminal:

    Command Action
    npm install Installs dependencies
    npm run dev Starts local dev server at localhost:3000
    npm run build Build your production site to ./dist/
    npm run preview Preview your build locally, before deploying
    npm run astro ... Run CLI commands like astro add, astro preview
    npm run astro --help Get help using the Astro CLI
    Visit original content creator repository https://github.com/sylvainDNS/10-idees-recues-anarchisme
  • coordinate-reference-systems-ios

    Coordinate Reference Systems iOS

    Coordinate Reference Systems Lib

    The Coordinate Reference Systems Library was developed at the National Geospatial-Intelligence Agency (NGA) in collaboration with BIT Systems. The government has “unlimited rights” and is releasing this software to increase the impact of government investments by providing developers with the opportunity to take things in new directions. The software use, modification, and distribution rights are stipulated within the MIT license.

    Pull Requests

    If you’d like to contribute to this project, please make a pull request. We’ll review the pull request and discuss the changes. All pull request contributions to this project will be released under the MIT license.

    Software source code previously released under an open source license and then modified by NGA staff is considered a “joint work” (see 17 USC § 101); it is partially copyrighted, partially public domain, and as a whole is protected by the copyrights of the non-government authors and must be released according to the terms of the original open source license.

    About

    Coordinate Reference Systems is an iOS library implementation of OGC’s ‘Geographic information — Well-known text representation of coordinate reference systems’ (18-010r7) specification.

    For projection conversions between coordinates, see Projections.

    Usage

    View the latest Appledoc

    @import CoordinateReferenceSystems;
    
    // NSString *wkt = ...
    
    CRSObject *crs = [CRSReader read:wkt];
    
    CRSType type = crs.type;
    CRSCategoryType category = crs.categoryType;
    
    NSString *text = [CRSWriter write:crs];
    NSString *prettyText = [CRSWriter writePretty:crs];
    
    switch(category){
    
        case CRS_CATEGORY_CRS:
        {
            CRSCoordinateReferenceSystem *coordRefSys = (CRSCoordinateReferenceSystem *) crs;
    
            switch (type) {
                case CRS_TYPE_BOUND:
                {
                    CRSBoundCoordinateReferenceSystem *bound = (CRSBoundCoordinateReferenceSystem *) coordRefSys;
                    // ...
                    break;
                }
                case CRS_TYPE_COMPOUND:
                {
                    CRSCompoundCoordinateReferenceSystem *compound = (CRSCompoundCoordinateReferenceSystem *) coordRefSys;
                    // ...
                    break;
                }
                case CRS_TYPE_DERIVED:
                {
                    CRSDerivedCoordinateReferenceSystem *derived = (CRSDerivedCoordinateReferenceSystem *) coordRefSys;
                    // ...
                    break;
                }
                case CRS_TYPE_ENGINEERING:
                {
                    CRSEngineeringCoordinateReferenceSystem *engineering = (CRSEngineeringCoordinateReferenceSystem *) coordRefSys;
                    // ...
                    break;
                }
                case CRS_TYPE_GEODETIC:
                case CRS_TYPE_GEOGRAPHIC:
                {
                    CRSGeoCoordinateReferenceSystem *geo = (CRSGeoCoordinateReferenceSystem *) coordRefSys;
                    // ...
                    break;
                }
                case CRS_TYPE_PARAMETRIC:
                {
                    CRSParametricCoordinateReferenceSystem *parametric = (CRSParametricCoordinateReferenceSystem *) coordRefSys;
                    // ...
                    break;
                }
                case CRS_TYPE_PROJECTED:
                {
                    CRSProjectedCoordinateReferenceSystem *projected = (CRSProjectedCoordinateReferenceSystem *) coordRefSys;
                    // ...
                    break;
                }
                case CRS_TYPE_TEMPORAL:
                {
                    CRSTemporalCoordinateReferenceSystem *temporal = (CRSTemporalCoordinateReferenceSystem *) coordRefSys;
                    // ...
                    break;
                }
                case CRS_TYPE_VERTICAL:
                {
                    CRSVerticalCoordinateReferenceSystem *vertical = (CRSVerticalCoordinateReferenceSystem *) coordRefSys;
                    // ...
                    break;
                }
                default:
                    break;
            }
    
            // ...
            break;
        }
    
        case CRS_CATEGORY_METADATA:
        {
    
            CRSCoordinateMetadata *metadata = (CRSCoordinateMetadata *) crs;
    
            // ...
            break;
        }
    
        case CRS_CATEGORY_OPERATION:
        {
    
            CRSOperation *operation = (CRSOperation *) crs;
    
            switch (type) {
                case CRS_TYPE_CONCATENATED_OPERATION:
                {
                    CRSConcatenatedOperation *concatenatedOperation = (CRSConcatenatedOperation *) operation;
                    // ...
                    break;
                }
                case CRS_TYPE_COORDINATE_OPERATION:
                {
                    CRSCoordinateOperation *coordinateOperation = (CRSCoordinateOperation *) operation;
                    // ...
                    break;
                }
                case CRS_TYPE_POINT_MOTION_OPERATION:
                {
                    CRSPointMotionOperation *pointMotionOperation = (CRSPointMotionOperation *) operation;
                    // ...
                    break;
                }
                default:
                    break;
            }
    
            // ...
            break;
    
        }
    
    }
    

    PROJ

    // NSString *wkt = ...
    
    CRSObject *crs = [CRSReader read:wkt];
    
    CRSProjParams *projParamsFromCRS = [CRSProjParser paramsFromCRS:crs];
    NSString *projTextFromCRS = [CRSProjParser paramsTextFromCRS:crs];
    CRSProjParams *projParamsFromWKT = [CRSProjParser paramsFromText:wkt];
    NSString *projTextFromWKT = [CRSProjParser paramsTextFromText:wkt];
    

    Build

    Build

    Build this repository using Swift Package Manager:

    swift build
    

    Run tests from Xcode or from command line:

    swift test
    

    Open the Swift Package in Xcode from command line:

    open Package.swift
    

    Include Library

    Use this library via SPM in your Package.swift:

    dependencies: [
        .package(url: "https://github.com/ngageoint/coordinate-reference-systems-ios.git", branch: "release/2.0.0"),
    ]
    

    Or as a tagged release:

    dependencies: [
        .package(url: "https://github.com/ngageoint/coordinate-reference-systems-ios.git", from: "2.0.0"),
    ]
    

    Reference it in your Package.swift target:

    .target(
        name: "projections",
        dependencies: [
            .product(name: "CoordinateReferenceSystems", package: "coordinate-reference-systems-ios"),
        ],
    ),
    

    Swift

    Import the framework in Swift.

    import CoordinateReferenceSystems
    
    // var wkt: String = ...
    
    let crs : CRSObject = CRSReader.read(wkt)
    
    var type : CRSType = crs.type
    var category : CRSCategoryType = crs.categoryType()
    
    let text : String = CRSWriter.write(crs)
    let prettyText : String = CRSWriter.writePretty(crs)
    
    switch category{
    
    case .CATEGORY_CRS:
    
        let coordRefSys : CRSCoordinateReferenceSystem = crs as! CRSCoordinateReferenceSystem
    
        switch type {
        case .TYPE_BOUND:
            let bound : CRSBoundCoordinateReferenceSystem = coordRefSys as! CRSBoundCoordinateReferenceSystem
            // ...
            break
        case .TYPE_COMPOUND:
            let compound : CRSCompoundCoordinateReferenceSystem = coordRefSys as! CRSCompoundCoordinateReferenceSystem
            // ...
            break
        case .TYPE_DERIVED:
            let derived : CRSDerivedCoordinateReferenceSystem = coordRefSys as! CRSDerivedCoordinateReferenceSystem
            // ...
            break
        case .TYPE_ENGINEERING:
            let engineering : CRSEngineeringCoordinateReferenceSystem = coordRefSys as! CRSEngineeringCoordinateReferenceSystem
            // ...
            break
        case .TYPE_GEODETIC, .TYPE_GEOGRAPHIC:
            let geo : CRSGeoCoordinateReferenceSystem = coordRefSys as! CRSGeoCoordinateReferenceSystem
            // ...
            break
        case .TYPE_PARAMETRIC:
            let parametric : CRSParametricCoordinateReferenceSystem = coordRefSys as! CRSParametricCoordinateReferenceSystem
            // ...
            break
        case .TYPE_PROJECTED:
            let projected : CRSProjectedCoordinateReferenceSystem = coordRefSys as! CRSProjectedCoordinateReferenceSystem
            // ...
            break
        case .TYPE_TEMPORAL:
            let temporal : CRSTemporalCoordinateReferenceSystem = coordRefSys as! CRSTemporalCoordinateReferenceSystem
            // ...
            break
        case .TYPE_VERTICAL:
            let vertical : CRSVerticalCoordinateReferenceSystem = coordRefSys as! CRSVerticalCoordinateReferenceSystem
            // ...
            break
        default:
            break
        }
    
        // ...
        break
    
    case .CATEGORY_METADATA:
    
        let metadata : CRSCoordinateMetadata = crs as! CRSCoordinateMetadata
    
        // ...
        break
    
    case .CATEGORY_OPERATION:
    
        let operation = crs as! CRSOperation
    
        switch type {
        case .TYPE_CONCATENATED_OPERATION:
            let concatenatedOperation : CRSConcatenatedOperation = operation as! CRSConcatenatedOperation
            // ...
            break
        case .TYPE_COORDINATE_OPERATION:
            let coordinateOperation : CRSCoordinateOperation = operation as! CRSCoordinateOperation
            // ...
            break
        case .TYPE_POINT_MOTION_OPERATION:
            let pointMotionOperation : CRSPointMotionOperation = operation as! CRSPointMotionOperation
            // ...
            break
        default:
            break
        }
    
        // ...
        break
    
    default:
        break
    }

    PROJ

    // var wkt: String = ...
    
    let crs : CRSObject = CRSReader.read(wkt)
    
    let projParamsFromCRS : CRSProjParams = CRSProjParser.params(fromCRS: crs)
    let projTextFromCRS : String = CRSProjParser.paramsText(fromCRS: crs)
    let projParamsFromWKT : CRSProjParams = CRSProjParser.params(fromText: wkt)
    let projTextFromWKT : String = CRSProjParser.paramsText(fromText: wkt)
    Visit original content creator repository https://github.com/ngageoint/coordinate-reference-systems-ios
  • UMERegRobust

    UMERegRobust – Universal Manifold Embedding Compatible Features for Robust Point Cloud Registration (ECCV 2024)




    In this work, we adopt the Universal Manifold Embedding (UME) framework for the estimation of rigid transformations and extend it, so that it can accommodate scenarios involving partial overlap and differently sampled point clouds. UME is a methodology designed for mapping observations of the same object, related by rigid transformations, into a single low-dimensional linear subspace. This process yields a transformation-invariant representation of the observations, with its matrix form representation being covariant (i.e. equivariant) with the transformation. We extend the UME framework by introducing a UME-compatible feature extraction method augmented with a unique UME contrastive loss and a sampling equalizer. These components are integrated into a comprehensive and robust registration pipeline, named UMERegRobust. We propose the RotKITTI registration benchmark, specifically tailored to evaluate registration methods for scenarios involving large rotations. UMERegRobust achieves better than state-of-the-art performance on the KITTI benchmark, especially when a strict precision of $(1^\circ, 10cm)$ is considered (with an average gain of +9%), and notably outperforms SOTA methods on the RotKITTI benchmark (with a +45% gain compared to the most recent SOTA method).

    Arxiv Link: https://www.arxiv.org/abs/2408.12380
    Paper Link: ECCV2024 Springer Version


    Method Overview

    (Figure: method overview)


    Environment Setup

    Code was tested on:

    • Ubuntu 20.04
    • Python 3.8
    • Cuda 11.7
    • Pytorch 1.13.0+cu117

    Special Packages Used:

    Create Env:

    # Create Conda Env
    conda create -n umereg_conda_env python=3.8
    
    # Install CUDA Toolkit 11.7
    conda install nvidia/label/cuda-11.7.0::cuda-toolkit
    conda install conda-forge::cudatoolkit-dev
    
    # Git for Conda
    conda install git
    
    # Install Pytorch 1.13.0+cu117
    pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
    
    # Install MinkowskiEngine 
    pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --config-settings="--blas_include_dirs=${CONDA_PREFIX}/include" --config-settings="--blas=openblas"
    
    # Install Pytorch3D + torch_scatter
    pip install "git+https://github.com/facebookresearch/pytorch3d.git"
    pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.0+cu117.html
    
    # NKSR
    pip install -U nksr -f https://nksr.huangjh.tech/whl/torch-1.13.0+cu117.html
    
    # Other Relevant Packages
    pip install open3d
    pip install tensorboard

    Clone UMERegRobust Repository:

    git clone https://github.com/yuvalH9/UMERegRobust.git

    Datasets

    You can evaluate or train UMERegRobust on both the KITTI dataset and the nuScenes dataset.

    Please refer to the detailed datasets guidelines:


    Sampling Equalizer Module (SEM) Preprocessing

    To use the SEM to preprocess the input point cloud, please use:

    python datasets/sem_preprocessing.py --dataset_mode [kitti\nuscenes] --split [train\val] --data_path path_to_input_data --output_path path_to_output

    We also supply download links to the SEM already-preprocessed data for both KITTI (test, lokitti, rotkitti) and nuScenes (test, lonuscenes, rotnuscenes) registration benchmarks.


    RotKITTI & RotNuscenes Registration Benchmarks

    We suggest new registration benchmarks, RotKITTI and RotNuscenes; these benchmarks focus on point cloud pairs with big relative rotations in the wild (not synthetic rotations). Each benchmark contains registration problems with relative rotations ranging between 30-180 degrees. We encourage the community to test their methods on these benchmarks.

    To use the benchmarks, first download the KITTI \ nuScenes datasets as described in section Datasets. Next, the registration problems (source-target pairs) are saved in the files rotkitti_metadata.npy and rotnuscenes_metadata.npy, along with their corresponding GT transformations in the files rotkitti_gt_tforms.npy and rotnuscenes_gt_tforms.npy, respectively.
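
    For instance, loading the RotKITTI problem list and its ground-truth transformations might look like this (a sketch; the exact array layout is an assumption, so inspect the arrays after loading):

    import numpy as np

    # Registration problems (source-target pairs) and their GT rigid transformations
    pairs = np.load('rotkitti_metadata.npy', allow_pickle=True)
    gt_tforms = np.load('rotkitti_gt_tforms.npy', allow_pickle=True)
    print(pairs.shape, gt_tforms.shape)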


    Usage

    Eval

    1. Download the original data as described in section Datasets to data_path.
    2. Download the SEM preprocessed data as described in section SEM Preprocessing to cache_data_path.
    3. Update paths in relevant benchmark config files.
    4. Evaluate KITTI benchmarks:
      python evaluate.py --benchmark [kitti_test\lokitti\rotkitti]
    5. Evaluate nuScenes benchmarks:
      python evaluate.py --benchmark [nuscenes_test\lonuscenes\rotnuscenes]

    Train

    1. Download the original data as described in section Datasets to data_path.
    2. Run the SEM preprocessing for train and val splits as described in section SEM Preprocessing output data to cache_data_path.
    3. Update paths in relevant train config files.
    4. Train KITTI:
      python train_coloring.py --config kitti
    5. Train nuScenes benchmarks:
      python train_coloring.py --config nuscenes

    Results – KITTI Benchmarks

    KITTI Test

    Method Normal Precision
    (1.5°, 30 cm)
    Strict Precision
    (1°, 10 cm)
    FCGF 75.1 73.1
    Predator 88.2 58.7
    CoFiNet 83.2 56.4
    GeoTrans 66.3 62.6
    GCL 93.9 78.6
    UMERegRobust 94.3 87.8

    Table1: KITTI Benchmark – Registration Recall [%]

    RotKITTI

    Method Normal Precision
    (1.5°, 30 cm)
    Strict Precision
    (1°, 10 cm)
    FCGF 11.6 3.6
    Predator 41.6 35.0
    CoFiNet 62.5 30.1
    GeoTrans 78.5 50.1
    GCL 40.1 28.8
    UMERegRobust 81.1 73.3

    Table2: RotKITTI Benchmark – Registration Recall [%]

    LoKITTI

    Method Normal Precision
    (1.5°, 30 cm)
    Strict Precision
    (1°, 10 cm)
    FCGF 17.2 6.9
    Predator 33.7 28.4
    CoFiNet 11.2 1.0
    GeoTrans 37.8 7.2
    GCL 72.3 26.9
    UMERegRobust 59.3 30.2

    Table3: LoKITTI Benchmark – Registration Recall [%]


    Results – nuScenes Benchmarks

    nuScenes Test

    Method Normal Precision
    (1.5°, 30 cm)
    Strict Precision
    (1°, 10 cm)
    FCGF 58.2 37.8
    Predator 53.9 48.1
    CoFiNet 62.3 56.1
    GeoTrans 70.7 37.9
    GCL 82.0 67.5
    UMERegRobust 85.5 76.0

    Table4: nuScenes Benchmark – Registration Recall [%]

    RotNuscenes

    Method Normal Precision
    (1.5°, 30 cm)
    Strict Precision
    (1°, 10 cm)
    FCGF 5.5 5.2
    Predator 16.5 15.7
    CoFiNet 27.0 23.6
    GeoTrans 34.3 13.1
    GCL 21.0 19.6
    UMERegRobust 51.9 39.7

    Table5: RotNuScenes Benchmark – Registration Recall [%]

    LoNuscenes

    Method Normal Precision
    (1.5°, 30 cm)
    Strict Precision
    (1°, 10 cm)
    FCGF 1.9 0.0
    Predator 35.6 4.2
    CoFiNet 30.3 23.5
    GeoTrans 48.1 17.3
    GCL 62.3 5.6
    UMERegRobust 70.8 56.3

    Table6: LoNuScenes Benchmark – Registration Recall [%]


    Citation

    If you find this work useful, please cite:

    @inproceedings{haitman2025umeregrobust,
      title={UMERegRobust-Universal Manifold Embedding Compatible Features for Robust Point Cloud Registration},
      author={Haitman, Yuval and Efraim, Amit and Francos, Joseph M},
      booktitle={European Conference on Computer Vision},
      pages={358--374},
      year={2025},
      organization={Springer}
    }

    Visit original content creator repository https://github.com/yuvalH9/UMERegRobust
  • nextjs-netlify-blog-template

    Next.js blogging template for Netlify


    Next.js blogging template for Netlify is a boilerplate for building blogs with only Netlify stacks.

    There are some boilerplates and tutorials for the combination of Next.js and Netlify on GitHub. These resources have documentation and good tutorials to get started with Next.js and Netlify quickly, but they are too simple for building blogs with standard features like tagging.

    Next.js blogging template for Netlify has already implemented these standard features for building blogs using only Next.js and Netlify stacks.

    Demo

    Deploy on your environment by clicking here:

    Deploy to Netlify

    Or access the following demo site:

    Next.js blog template for Netlify

    Features

    • Tagging: organizes content by tags
    • Author: displays author names who write a post
    • Pagination: limits the number of posts per page
    • CMS: built with a CMS to allow editors to modify content in the quickest way
    • SEO optimized: built-in metadata like JSON-LD
    • Shortcode: extends content writing with React components, like WordPress shortcodes

    Dependencies

    Getting started

    To create your blog using the template, open your terminal, cd into the directory you’d like to create the app in, and run the following command:

    npx create-next-app your-blog --example "https://github.com/wutali/nextjs-netlify-blog-template"
    

    After that, set up your project by following the Netlify blog post:

    A Step-by-Step Guide: Deploying on Netlify

    Customization

    This template is just a boilerplate that users can customize freely after the project is cloned and started. The following instructions introduce common customization points like adding new metadata or applying a new design theme.

    Styling pages by a customized theme

    All source code related to the blog is under the components and pages directories. You can modify it freely if you want to apply your own design theme. All components use styled-jsx and css-modules to define their styles, but you can choose any styling libraries for designing your theme.

    The directory tree containing the blog source code is described below:

    meta: yaml files defining metadata like authors or tags
    public: images, favicons and other static assets
    src
    ├── assets: other assets using inside of components
    ├── components: pieces of components consisting of pages
    ├── content: mdx files for each post page
    ├── lib: project libraries like data fetching or pagination
    └── pages: page components managing by Next.js
    

    Organizing content by categories

    The category metadata associated with content has the same relationship as the authors’ metadata. Reference these implementations for adding new metadata:

    After you read the above source code, you will see there are four steps to add the category metadata to your project:

    1. Define the category metadata in the above Netlify config file
    2. Create an empty file named categories.yml under the meta directory
    3. Create a new module for fetching category metadata
    4. Display the category metadata on src/components/PostLayout.tsx or other components you want

    That is all you have to do. After that, you can access Netlify CMS and create new categories at any time.

    Locale settings for Netlify CMS

    Modify config.yml and index.html under the public/admin directory as per the following instructions:

    Netlify CMS – Configuration Options #Locale

    References

    License

    MIT

    Visit original content creator repository https://github.com/wutali/nextjs-netlify-blog-template