Blog

  • demo-golang-pubsub

    Pub/Sub Demo with Golang and DragonFlyDB

    This project demonstrates the Publisher/Subscriber pattern using Golang and DragonFlyDB. It consists of two services:

    1. Publisher Service (pub.go) – Publishes messages to a channel
    2. Subscriber Service (subscriber.go) – Subscribes to the channel and processes messages

    Prerequisites

    • Go 1.21 or higher
    • Docker and Docker Compose (for containerized deployment)
    • DragonFlyDB (automatically set up with Docker Compose)

    How It Works

    1. The Publisher service connects to DragonFlyDB and periodically publishes JSON messages to a specified channel.
    2. The Subscriber service connects to the same DragonFlyDB instance and subscribes to the channel.
    3. When the Publisher sends a message, DragonFlyDB distributes it to all Subscribers.
    4. The Subscriber receives the message, processes it, and measures the latency.

    Key Concepts Demonstrated

    • Pub/Sub Pattern: Decouples message senders (publishers) from receivers (subscribers)
    • Message Distribution: One message can be received by multiple subscribers
    • Asynchronous Communication: Publishers and subscribers operate independently
    • Scalability: Easy to add more publishers or subscribers without changing the code

    Running the Demo

    Using Docker Compose

    # Start all services
    docker compose up
    
    # To run with multiple subscriber instances
    docker compose up --scale subscriber=3

    Running Locally

    1. Start DragonFlyDB:
    docker run -p 6379:6379 docker.dragonflydb.io/dragonflydb/dragonfly
    2. Run the subscriber:
    go run subscriber.go
    3. Run the publisher:
    go run pub.go

    Configuration

    Both the publisher and subscriber are configured via environment variables:

    • REDIS_ADDR – DragonFlyDB address (default: “localhost:6379”)
    • CHANNEL – Channel name (default: “messages”)

    Example environment variable usage:

    export REDIS_ADDR=localhost:6379
    export CHANNEL=custom-channel
    go run pub.go
    go run subscriber.go

    Message Format

    Messages are JSON-encoded with the following structure:

    {
      "id": "msg-123",
      "content": "This is message #123",
      "timestamp": "2023-04-18T12:34:56.789Z"
    }

    Graceful Shutdown

    Both services handle SIGINT and SIGTERM signals for graceful shutdown:

    1. Stop publishing/receiving messages.
    2. Unsubscribe from channels (for subscribers).
    3. Close DragonFlyDB connections.
    4. Exit cleanly.

    Extensions and Improvements

    • Add message acknowledgment
    • Implement message persistence
    • Add message filtering capabilities
    • Implement message replay functionality
    • Create a web interface to monitor message flow

    Visit original content creator repository
    https://github.com/jawaracloud/golang-pub-sub

  • LV-DOT

    LV-DOT: LiDAR-Visual Dynamic Obstacle Detection and Tracking for Autonomous Robots


    This repository implements the LiDAR-Visual Dynamic Obstacle Detection and Tracking (LV-DOT) framework, which aims to detect and track dynamic obstacles for robots with extremely constrained computational resources.

    The LV-DOT framework supports dynamic obstacle detection and tracking with multiple sensor configurations:

    • Camera-only mode.
    • LiDAR-only mode.
    • Combined LiDAR and camera mode.

    For additional details, please refer to the related paper available here:

    Zhefan Xu*, Haoyu Shen*, Xinming Han, Hanyu Jin, Kanlong Ye, and Kenji Shimada, “LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for autonomous robot navigation”, arXiv, 2025. [preprint] [YouTube] [BiliBili]

    *The authors contributed equally.

    News

    • 2025-02-28: The GitHub code, video demos, and relevant papers for our LV-DOT framework are released. The authors will actively maintain and update this repo!

    Table of Contents

    I. Installation Guide

    The system requirements for this repository are as follows. Please ensure your system meets these requirements:

    • Ubuntu 18.04/20.04 LTS
    • ROS Melodic/Noetic

    This package has been tested on the following onboard computer:

    Please follow the instructions below to install this package.

    # This package needs ROS vision_msgs package
    sudo apt install ros-noetic-vision-msgs
    
    # Install YOLOv11 required package
    pip install ultralytics
    
    cd ~/catkin_ws/src
    git clone https://github.com/Zhefan-Xu/LV-DOT.git
    cd ..
    catkin_make
    

    II. Run Demo

    a. Run on dataset

    Please download the rosbag file from this link:

    rosbag play -l corridor_demo.bag
    roslaunch onboard_detector run_detector.launch
    

    The perception results can be visualized in Rviz as follows:

    corridor_demo.mp4

    b. Run on your device

    Please adjust the configuration file under cfg/detector_param.yaml to match your LiDAR and camera devices. Also, change the color image topic name in scripts/yolo_detector/yolov11_detector.py.

    From the parameter file, you can find that the algorithm expects the following data from the robot:

    • LiDAR Point Cloud: /pointcloud

    • Depth image: /camera/depth/image_rect_raw

    • Color image: /camera/color/image_rect_raw

    • Robot pose: /mavros/local_position/pose

    • Robot odometry (alternative to robot pose): /mavros/local_position/odom

    Additionally, update the camera intrinsic parameters and the camera-LiDAR extrinsic parameters in the config file.

    Run the following command to launch dynamic obstacle detection and tracking.

    # Launch your sensor device first. Make sure it has the above data.
    roslaunch onboard_detector run_detector.launch
    

    The LV-DOT can be directly utilized to assist mobile robot navigation and collision avoidance in dynamic environments, as demonstrated below:

    III. LV-DOT Framework and Results

    The LV-DOT framework is shown below. Using onboard LiDAR, camera, and odometry inputs, the LiDAR and depth detection modules detect 3D obstacles, while the color detection module identifies 2D dynamic obstacles. The LiDAR-visual fusion module refines these detections, and the tracking module classifies obstacles as static or dynamic.

    Example qualitative perception results in various testing environments are shown below:

    IV. Citation and Reference

    If our work is useful to your research, please consider citing our paper.

    @article{LV-DOT,
      title={LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for autonomous robot navigation},
      author={Xu, Zhefan and Shen, Haoyu and Han, Xinming and Jin, Hanyu and Ye, Kanlong and Shimada, Kenji},
      journal={arXiv preprint arXiv:2502.20607},
      year={2025}
    }
    

    V. Acknowledgement

    The authors would like to express their sincere gratitude to Professor Kenji Shimada for his great support and all CERLAB UAV team members who contribute to the development of this research.

    Visit original content creator repository https://github.com/Zhefan-Xu/LV-DOT
  • pyrasterize

    pyrasterize

    Rasterizer for pygame without additional dependencies

    Screenshot of demo_first_person.py

    The rasterizer relies on the speed of pygame’s 2D polygon drawing functions to achieve acceptable performance. As an alternative to per-pixel Gouraud shading and texturing, which would be too slow, an approximation is available that simulates those effects by subdividing triangles to an adjustable degree. The rasterizer ingests a hierarchical scene graph (a Python dict) as the scene description.

    Demos

    Back-face culling video https://imgur.com/gallery/6xwGUk4

    Painter’s algorithm video https://imgur.com/gallery/47Z6Vle

    Scene graph video https://imgur.com/gallery/lGsBms1

    FOV video https://imgur.com/gallery/xXC56Cl

    Mouse selection video https://imgur.com/a/YxD0HCn

    Shell game demo made with pygbag on itch.io https://pelicanicious.itch.io/pyrasterize-shellgame

    Gouraud shading / draw modes demo video https://i.imgur.com/XC3njax.mp4

    First person game-like multiple features demo: https://i.imgur.com/iEafd3x.mp4 – on itch.io: https://pelicanicious.itch.io/pyrasterize-firstpersondemo

    3D labyrinth first person demo: https://i.imgur.com/zLBnNwk.mp4 video – on itch.io: https://pelicanicious.itch.io/pyrasterize-labyrinth

    Particles demo: https://i.imgur.com/WGhYuGs.mp4

    Fast pseudo Gouraud implementation demo: https://www.reddit.com/r/pygame/comments/1347a71/a_pseudo_gouraud_shading_algorithm_for_software/

    Animated meshes demo: https://i.imgur.com/w8k9R32.mp4

    Visit original content creator repository https://github.com/rkibria/pyrasterize
  • memprofiler

    memprofiler

    Build Status codecov

    Memprofiler helps to track memory allocations of your Go applications over large time intervals. The Go runtime implements multiple memory management optimizations in order to achieve good performance, low heap allocation cost, and a high degree of memory reuse. Therefore, it can sometimes be tricky to distinguish “normal” runtime behaviour from a real memory leak. If you have doubts about whether your Go service is leaking, you’re on the right track. Memprofiler aims to be an open source equivalent of stackimpact.com.

    Warning: The project is under active development and not ready for usage yet.

    Getting started

    Memprofiler is a client-server application. The Memprofiler client is embedded into your Go service and streams memory usage reports to the Memprofiler server. The Memprofiler server stores the reports and performs some computations on the data stream to turn it into a small set of aggregated metrics. Users can interact with the Memprofiler server via a simple web UI.

    Components

    Client

    To use Memprofiler in your application, run client in your main function:

    package main
    
    import (
    	"time"
    
    	"github.com/sirupsen/logrus"
    
    	"github.com/memprofiler/memprofiler/client"
    	"github.com/memprofiler/memprofiler/schema"
    	"github.com/memprofiler/memprofiler/utils"
    )
    
    func main() {
    	// prepare client configuration
    	cfg := &client.Config{
    		// server address
    		ServerEndpoint: "localhost:46219",
    		// description of your application instance
    		ServiceDescription: &schema.ServiceDescription{
    			ServiceType:     "test_application",
    			ServiceInstance: "node_1",
    		},
    		// granularity
    		Periodicity: &utils.Duration{Duration: time.Second},
    		// logging setting
    		Verbose: false,
    	}
    
    	// you can implement your own logger
    	log := client.LoggerFromLogrus(logrus.New())
    
    	// run profiler and stop it explicitly on exit
    	profiler, err := client.NewProfiler(log, cfg)
    	if err != nil {
    		panic(err)
    	}
    	profiler.Start()
    	defer profiler.Quit()
    
    	// ...
    }

    Server

    To run Memprofiler server, just install it and prepare server config (you can refer to config example).

     ✗ GO111MODULE=on go get github.com/memprofiler/memprofiler
     ✗ memprofiler -c config.yml 
    DEBU[0000] Starting storage                             
    DEBU[0000] Starting metrics computer                    
    INFO[0000] HTTP Frontend server resource                 URL=/schema.MemprofilerFrontend/GetSessions subsystem=frontend
    INFO[0000] HTTP Frontend server resource                 URL=/schema.MemprofilerFrontend/GetServices subsystem=frontend
    INFO[0000] HTTP Frontend server resource                 URL=/schema.MemprofilerFrontend/GetInstances subsystem=frontend
    INFO[0000] HTTP Frontend server resource                 URL=/schema.MemprofilerFrontend/SubscribeForSession subsystem=frontend
    INFO[0000] Starting service                              service=backend
    INFO[0000] Starting service                              service=frontend
    
    Visit original content creator repository https://github.com/memprofiler/memprofiler
  • GoFetch

    Go Fetch! ( Live )

    GoFetch is a web application written in NextJS / React and styled with TailwindCSS. It allows users to view a stream of images from the Dog CEO API, with the ability to sort by breed. Images are converted to AVIF & WebP by NextJS and cached locally, so as to be considerate and keep excessive calls to the third-party API to a minimum.

    functionality: view dogs, upload dogs

    Pet-Form as a Service (PaaS)

    Additionally, it connects to a backend written in Go, “GoFetchBot”, a GitHub application that accepts user-submitted images of their dogs and scans them with a neural network for compliance (images must contain a dog, and may not contain a human, for GDPR reasons). If the image is acceptable, it creates a pull request with the image as a commit, for final human approval.

    To reiterate: that makes it a Pet Form as a Service, using Neural Networks.
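    The compliance rule above boils down to a small predicate over the labels the detector returns; a sketch (the label strings "dog" and "person" are assumptions based on common YOLO class names, not taken from the repo):

```go
package main

import "fmt"

// acceptable reports whether a submission passes the rule described above:
// the image must contain a dog and must not contain a human.
func acceptable(labels []string) bool {
	hasDog := false
	for _, label := range labels {
		switch label {
		case "person":
			// Humans are rejected outright, for GDPR reasons.
			return false
		case "dog":
			hasDog = true
		}
	}
	return hasDog
}

func main() {
	fmt.Println(acceptable([]string{"dog", "frisbee"})) // true
	fmt.Println(acceptable([]string{"dog", "person"}))  // false
	fmt.Println(acceptable([]string{"cat"}))            // false
}
```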

    makes pull requests

    uses neural networks

    Technical Details

    • Golang, with 100% test coverage, tested with go test, using TDD & sticking to a “Given When Then” syntax
    • Some limited End to End Testing
    • using GoCV and Yolov3 neural network
    • NextJS / React, styled with TailwindCSS
    • Deployed to a personally administered docker server

    Forking

    I don’t expect much attention on this repo, but it should be fairly easy to fork. The main requirement is either setting up OpenCV, or being comfortable using a Docker container where OpenCV has been installed. The tests were written against a mock GitHub repo (one that has a LOT of closed PRs), so you will have to set up an application to get the same test coverage. Forker beware: I am not proud of some of the React code, but I am pretty proud of the Go code, so…

    Other

    Go ahead and shoot me an email or make an issue or reach out on my socials if this is interesting to you or you want to chat!

    License

    GPLv3

    Visit original content creator repository https://github.com/jjgmckenzie/GoFetch
  • localtower

    Gem Gem

    Localtower

    Introduction

    New Model

    – What is Localtower?

    Localtower is a Rails engine mountable in the development environment to help you generate migrations for your Rails application. It’s like ActiveAdmin or Sidekiq UI: you plug it into your config/routes.rb and it works out of the box. Check the Installation section below for more details.

    – How Localtower works?

    Localtower gives you a UI to create models and migrations. It will generate a migration file just as you would with rails generate migration add_index_to_users. You will see the generated file in the db/migrate/ folder.

    – Why create a UI for Rails migrations?

    Rails migrations are well documented in the official Rails Guides, but we often tend to forget some commands or make typos, like writing add_index :user, :email instead of add_index :users, :email (did you spot the typo?). Working from a UI with a fixed list of commands reduces the chance of making errors.

    – When I’m using Localtower, can I still generate migrations from the command line?

    Of course! Localtower does not lock you in. You can still generate migrations like you did before; Localtower is just a migration generator. You can also generate a migration from Localtower and then edit it manually before running rails db:migrate.

    – What happens when I want to remove Localtower?

    You just have to remove the gem from your Gemfile, run bundle, remove the engine from config/routes.rb, and that’s it! All your previous migrations will stay in db/migrate/. You are never locked in with Localtower; you can install or uninstall it anytime. Remember, it is just a UI to generate files. Do not hesitate to open an issue on GitHub and tell me why you don’t want it anymore. It will be very valuable for me to understand what I can do better ❤.

    – Cool, but there are some migration options that are not available in Localtower, what can I do?

    Localtower doesn’t implement the entire Rails migrations API; I focused on the most common scenarios. If you need to do something tricky in your migrations, you can still edit them manually. You are also welcome to open an issue on GitHub to ask for a specific feature. I’m always open to extending the possibilities of Localtower.

    Screenshots

    Create a model

    New Model

    Create a migration

    New Migration

    See the Migrations (and migrate)

    Migrations

    Installation

    Please use the latest localtower version (>= 2). See the installation process below.

    Compatibility:

    • Rails >= 5.2
    • Ruby >= 2.3

    Add to your Gemfile file:

    group :development do
      gem 'localtower', '~> 2'
    end

    Run command in your terminal:

    bundle install

    Add to your config/routes.rb:

    MyApp::Application.routes.draw do
      if Rails.env.development?
        mount Localtower::Engine, at: 'localtower'
      end
    
      # Your other routes here:
      # ...
    end

    ⚠ IMPORTANT ⚠

    Change your config/environments/development.rb:

    Rails.application.configure do
      # This is the default:
      # config.active_record.migration_error = :page_load
    
      # Change it to:
      config.active_record.migration_error = false if defined?(Localtower)
    
      # ...
    end

    If you know how to override this configuration in the gem instead of doing it in your app code, please open an issue and tell me your solution.

    Usage

    To access the UI, run your local rails server and open your browser at http://localhost:3000/localtower.

    Full scenario

    Demo (2min)

    Localtower v2 demo

    Create a model

    It will create a migration file:

    Create a migration

    It will generate a migration file:

    Create another model

    Now, we add a Book model:

    All the migrations generated

    Files generated

    Every action made from the UI will generate native Rails migration files. Exactly like the rails generate command. But instead of generating files in the console, they are generated in the db/migrate folder.

    • The models:

    • The migration files:

    • The final schema:

    Upgrading

    I recommend upgrading to the latest version, which is 2.X.X. Be sure you have this in your Gemfile:

    group :development do
      gem 'localtower', '~> 2'
    end

    To upgrade, just use the latest version of Localtower.

    bundle update localtower
    

    Then restart your server.

    Contribute

    Thanks for reporting issues, I’ll do my best to fix the bugs 💪


    Run test

    If you want to contribute to the gem:

    Create a spec/dummy/.env file with the credentials for your PostgreSQL database. It should look like this:

    LOCALTOWER_PG_USERNAME="admin"
    LOCALTOWER_PG_PASSWORD="root_or_something"
    

    drop / create database:

    cd spec/dummy
    bundle exec rails db:drop
    bundle exec rails db:create
    rm app/models/*.rb

    Run the spec:

    bundle install
    bundle exec rspec spec/

    Deploy latest gem version

    Only for official contributors.

    git tag vX.X.X # replace with the latest version
    git push --tags
    rm *.gem
    gem build localtower.gemspec
    gem push localtower-*.gem
    

    Notes

    Do not hesitate to open issues if you have troubles using the gem.

    Visit original content creator repository https://github.com/damln/localtower
  • Resume-Creator

    Resume-Creator

    About the Project

    This is a web application for creating a resume easily by filling in forms, previewing the output, and downloading it as a PDF file.

    Tech stack

    • JavaScript
    • Bootstrap
    • CSS
    • HTML

    How to use

    • To add a section, click on the button and fill in the form.
    • Submit the form and the info will be shown in the preview section.
    • If you want to edit anything, click on it, edit it in the form, then submit.
    • You can preview the resume and download it as a PDF file.
    • To download, click on the preview and download PDF button.
    • Choose save as PDF.
    • You can customize PDF output properties such as margin and scale.
    • You can use the website on different devices.

    Screenshots

    demo.mp4

    This is an example

    example

    Live Preview

    Feel free to visit Resume Creator website , try it and send me feedback.

    Updates

    • Update #1
      • Reorder the sections as you want.
      • Edit anything you entered, just click on it.
      • Prevent page reload while typing or editing.
      • Fixed preview and download on mobile issue.
      • Improved user interface.

    Future work

    • Make the application more interactive with the user.
    • Working on server side.
    • Make real time writing preview.
    • Add extra features
    Visit original content creator repository https://github.com/Mohamad-Khalid/Resume-Creator
  • file_data

    file_data

    Build Status Coverage Status Code Climate Gem Version

    Ruby library that reads file metadata.

    Basic Usage for an Exif File

    require 'file_data'
    
    ## Step 1: Read in the exif data using either a file path or a stream
    
    # Using a file path...
    file_path = '/home/user/desktop/my_file.jpg' # Path to an exif file
    exif_data = FileData::Exif.all_data(file_path) # read in all of the exif data from the file path
    
    # Or using a stream...
    exif_data = File.open(file_path, 'rb') do |f|
      FileData::Exif.all_data(f)
    end
    
    ## Step 2: Data is divided into image data and thumbnail data. Pick which you want to work with.
    
    # Both objects are hash-like and should respond to all hash methods except .length, which instead returns the value of :Image_Structure_Length
    image_data = exif_data.image
    thumbnail_data = exif_data.thumbnail
    
    ## Step 3: Extract the tag values
    
    ### Step 3A: Extract tags with a known name (ones that are listed in the "Known Tag Keys" section below)
    
    # Convenience methods are added for the names after the last underscore in the known tag names (casing and underscores are ignored in the convenience method names)
    
    bits_per_sample = image_data.BitsPerSample # Gets :Image_Structure_BitsPerSample from the :Tiff section
    bits_per_sample = image_data.bits_per_sample # Also gets :Image_Structure_BitsPerSample from the :Tiff section
    
    ### Step 3B: Extract tags without a known name (ones NOT listed in the "Known Tag Keys" section below)
    
    # Use the format "#{ifd_id}-#{tag_id}" for the unknown tag to key into the data
    unknown_gps_tag_value = image_data["34853-99999"]

    Known Tag Keys

    Below is the contents of FileData::ExifTags.tag_groups which lists all known tag keys and their uniquely assigned names

    # Tiff Tags (0th and 1st IFDs)
    FileData::ExifTags.tag_groups[:Tiff] =
      {
        256 => :Image_Structure_Width,
        257 => :Image_Structure_Length,
        258 => :Image_Structure_BitsPerSample,
        259 => :Image_Structure_Compression,
        262 => :Image_Structure_PhotometricInterpretation,
        270 => :Other_ImageDescription,
        271 => :Other_Make,
        272 => :Other_Model,
        273 => :Recording_StripOffsets,
        274 => :Image_Structure_Orientation,
        277 => :Image_Structure_SamplesPerPixel,
        278 => :Recording_RowsPerStrip,
        279 => :Recording_StripByteCounts,
        283 => :Image_Structure_YResolution,
        284 => :Image_Structure_PlanarConfiguration,
        296 => :Image_Structure_ResolutionUnit,
        301 => :Image_Data_TransferFunction,
        305 => :Other_Software,
        306 => :Other_DateTime,
        315 => :Other_Artist,
        318 => :Image_Data_WhitePoint,
        319 => :Image_Data_PrimaryChromaticities,
        513 => :Recording_JPEGInterchangeFormat,
        514 => :Recording_JPEGInterchangeFormatLength,
        529 => :Image_Data_YCbCrCoefficients,
        530 => :Image_Structure_YCbCrSubSampling,
        531 => :Image_Structure_YCbCrPositioning,
        532 => :Image_Data_ReferenceBlackWhite,
        33_432 => :Other_Copyright
      }
    
    # Exif IFD Tags
    FileData::ExifTags.tag_groups[34_665] =
      {
        33_434 => :Exif_PictureTakingConditions_ExposureTime,
        33_437 => :Exif_PictureTakingConditions_FNumber,
        34_850 => :Exif_PictureTakingConditions_ExposureProgram,
        34_852 => :Exif_PictureTakingConditions_SpectralSensitivity,
        34_855 => :Exif_PictureTakingConditions_PhotographicSensitivity,
        34_856 => :Exif_PictureTakingConditions_OECF,
        34_864 => :Exif_PictureTakingConditions_SensitivityType,
        34_865 => :Exif_PictureTakingConditions_StandardOutputSensitivity,
        34_866 => :Exif_PictureTakingConditions_RecommendedExposureIndex,
        34_867 => :Exif_PictureTakingConditions_ISOSpeed,
        34_868 => :Exif_PictureTakingConditions_ISOSpeedLatitudeyyy,
        34_869 => :Exif_PictureTakingConditions_ISOSpeedLatitudezzz,
        36_864 => :Exif_Version_ExifVersion,
        36_867 => :Exif_DateAndTime_DateTimeOriginal,
        36_868 => :Exif_DateAndTime_DateTimeDigitized,
        37_121 => :Exif_Configuration_ComponentsConfiguration,
        37_122 => :Exif_Configuration_CompressedBitsPerPixel,
        37_377 => :Exif_PictureTakingConditions_ShutterSpeedValue,
        37_378 => :Exif_PictureTakingConditions_ApertureValue,
        37_379 => :Exif_PictureTakingConditions_BrightnessValue,
        37_380 => :Exif_PictureTakingConditions_ExposureBiasValue,
        37_381 => :Exif_PictureTakingConditions_MaxApertureValue,
        37_382 => :Exif_PictureTakingConditions_SubjectDistance,
        37_383 => :Exif_PictureTakingConditions_MeteringMode,
        37_384 => :Exif_PictureTakingConditions_LightSource,
        37_385 => :Exif_PictureTakingConditions_Flash,
        37_396 => :Exif_PictureTakingConditions_SubjectArea,
        37_386 => :Exif_PictureTakingConditions_FocalLength,
        37_500 => :Exif_Configuration_MakerNote,
        37_510 => :Exif_Configuration_UserComment,
        37_520 => :Exif_DateAndTime_SubsecTime,
        37_521 => :Exif_DateAndTime_SubsecTimeOriginal,
        37_522 => :Exif_DateAndTime_SubsecTimeDigitized,
        37_888 => :Exif_ShootingSituation_Temperature,
        37_889 => :Exif_ShootingSituation_Humidity,
        37_890 => :Exif_ShootingSituation_Pressure,
        37_891 => :Exif_ShootingSituation_WaterDepth,
        37_892 => :Exif_ShootingSituation_Acceleration,
        37_893 => :Exif_ShootingSituation_CameraElevationAngle,
        40_960 => :Exif_Version_FlashpixVersion,
        40_961 => :Exif_ColorSpace_ColorSpace,
        40_962 => :Exif_Configuration_PixelXDimension,
        40_963 => :Exif_Configuration_PixelYDimension,
        40_964 => :Exif_RelatedFile_RelatedSoundFile,
        41_483 => :Exif_PictureTakingConditions_FlashEnergy,
        41_484 => :Exif_PictureTakingConditions_SpatialFrequencyResponse,
        41_486 => :Exif_PictureTakingConditions_FocalPlaneXResolution,
        41_487 => :Exif_PictureTakingConditions_FocalPlaneYResolution,
        41_488 => :Exif_PictureTakingConditions_FocalPlanResolutionUnit,
        41_492 => :Exif_PictureTakingConditions_SubjectLocation,
        41_493 => :Exif_PictureTakingConditions_ExposureIndex,
        41_495 => :Exif_PictureTakingConditions_SensingMode,
        41_728 => :Exif_PictureTakingConditions_FileSource,
        41_729 => :Exif_PictureTakingConditions_SceneType,
        41_730 => :Exif_PictureTakingConditions_CFAPattern,
        41_985 => :Exif_PictureTakingConditions_CustomRendered,
        41_986 => :Exif_PictureTakingConditions_ExposureMode,
        41_987 => :Exif_PictureTakingConditions_WhiteBalance,
        41_988 => :Exif_PictureTakingConditions_DigitalZoomRatio,
        41_989 => :Exif_PictureTakingConditions_FocalLengthIn35mmFilm,
        41_990 => :Exif_PictureTakingConditions_SceneCaptureType,
        41_991 => :Exif_PictureTakingConditions_GainControl,
        41_992 => :Exif_PictureTakingConditions_Contrast,
        41_993 => :Exif_PictureTakingConditions_Saturation,
        41_994 => :Exif_PictureTakingConditions_Sharpness,
        41_995 => :Exif_PictureTakingConditions_DeviceSettingDescription,
        41_996 => :Exif_PictureTakingConditions_SubjectDistanceRange,
        42_016 => :Exif_Other_ImageUniqueID,
        42_032 => :Exif_Other_CameraOwnerName,
        42_033 => :Exif_Other_BodySerialNumber,
        42_034 => :Exif_Other_LensSpecification,
        42_035 => :Exif_Other_LensMake,
        42_036 => :Exif_Other_LensModel,
        42_037 => :Exif_Other_LensSerialNumber,
        42_240 => :Exif_ColorSpace_Gamma
      }
    
    # GPS IFD Tags
    FileData::ExifTags.tag_groups[34_853] =
      {
        0 => :GPS_Version,
        1 => :GPS_LatitudeRef,
        2 => :GPS_Latitude,
        3 => :GPS_LongitudeRef,
        4 => :GPS_Longitude,
        5 => :GPS_AltitudeRef,
        6 => :GPS_Altitude,
        7 => :GPS_TimeStamp,
        8 => :GPS_Satellites,
        9 => :GPS_Status,
        10 => :GPS_MeasureMode,
        11 => :GPS_DOP,
        12 => :GPS_SpeedRef,
        13 => :GPS_Speed,
        14 => :GPS_TrackRef,
        15 => :GPS_Track,
        16 => :GPS_ImgDirectionRef,
        17 => :GPS_ImgDirection,
        18 => :GPS_MapDatum,
        19 => :GPS_DestLatitudeRef,
        20 => :GPS_DestLatitude,
        21 => :GPS_DestLongitudeRef,
        22 => :GPS_DestLongitude,
        23 => :GPS_DestBearingRef,
        24 => :GPS_DestBearing,
        25 => :GPS_DestDistanceRef,
        26 => :GPS_DestDistance,
        27 => :GPS_ProcessingMethod,
        28 => :GPS_AreaInformation,
        29 => :GPS_DateStamp,
        30 => :GPS_Differential,
        31 => :GPS_HPositioningError
      }
    
    # Interoperability IFD Tags
    FileData::ExifTags.tag_groups[40_965] =
      {
        1 => :Interoperability_Index
      }

    Mpeg4 documentation

    filepath = '...' # path to an mpeg4 file
    File.open(filepath, 'rb') do |stream|
      parser = FileData::MvhdBoxParser # class that parses the box you want
      method = :creation_time # attribute to get from the parse result
      box_path = ['moov', 'mvhd'] # path to get to the box that you want
    
      # final result that you are looking for
      result = FileData::Mpeg4.get_value(stream, parser, method, *box_path)
    end
    Visit original content creator repository https://github.com/ScottHaney/file_data
  • aorura

    AORURA

    AORURA LED library, CLI, and emulator.

    Table of contents

    Protocol

    AORURA communicates via a serial connection (19200n8). All commands it supports are exactly two bytes:

    • XX turns the LED off
    • A< puts the LED into its signature shimmering “aurora” state
    • a color byte followed by ! makes the LED light up with the given color
    • a color byte followed by * makes the LED flash with the given color at a half-second interval

    AORURA responds to these commands with a single byte: Y if successful, N if not.

    There’s one more: SS. AORURA responds to this command with two bytes representing the command for its current state.

    AORURA’s initial state is B* (flashing blue).

    Valid color bytes:

    • B: blue
    • G: green
    • O: orange
    • P: purple
    • R: red
    • Y: yellow
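    The two-byte commands above are trivial to construct programmatically. A sketch in Go (the AORURA library itself is written in Rust; the names here are mine, for illustration only):

```go
package main

import "fmt"

// Fixed two-byte commands from the protocol description.
const (
	cmdOff    = "XX" // turn the LED off
	cmdAurora = "A<" // shimmering "aurora" state
	cmdStatus = "SS" // query the current state
)

// staticCmd returns the command that lights the LED with the given color byte.
func staticCmd(color byte) string { return string(color) + "!" }

// flashCmd returns the command that flashes the LED with the given color byte.
func flashCmd(color byte) string { return string(color) + "*" }

func main() {
	fmt.Println(flashCmd('B'))  // B*, the initial state: flashing blue
	fmt.Println(staticCmd('R')) // R!, static red
	fmt.Println(cmdOff)         // XX
}
```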

    Library

    aorura is a library that implements the AORURA protocol.

    Usage

    Example

    use aorura::*;
    use failure::*;
    
    fn main() -> Fallible<()> {
      let mut led = Led::open("/dev/ttyUSB0")?;
    
      led.set(State::Flash(Color::Red))?;
      led.set(State::Off)?;
    
      assert_eq!(led.get()?, State::Off);
      assert_eq!(State::try_from(b"B*")?, State::Flash(Color::Blue));
    
      Ok(())
    }

    CLI

    aorura-cli is a CLI built on top of the AORURA library.

    Usage

    Usage: aorura-cli <path> [--set STATE]
           aorura-cli --help
    
    Gets/sets the AORURA LED state.
    
    Options:
      --set STATE  set the LED to the given state
    
    States: aurora, flash:COLOR, off, static:COLOR
    Colors: blue, green, orange, purple, red, yellow
    

    Example

    path=/dev/ttyUSB0
    original_state=$(aorura-cli $path)
    
    aorura-cli $path --set flash:yellow
    
    # Do something time-consuming:
    sleep 10
    
    # Revert back to the original LED state:
    aorura-cli $path --set "$original_state"

    Emulator

    aorura-emu is a PTY-based AORURA emulator. It can be used with the library or the CLI in lieu of the hardware.

    Usage

    Usage: aorura-emu <path>
           aorura-emu --help
    
    Emulates AORURA over a PTY symlinked to the given path.
    

    Hardware

    • AORURA-3 (HoloPort and HoloPort+)

      AORURA-3 photo

    • AORURA-UART-1 (HoloPort Nano)

      AORURA-UART-1 photo

    Visit original content creator repository https://github.com/lukateras/aorura