Blog

  • cps-shelf-adder

    How to use

    Create a list of books to add to a shelf by exporting it from Calibre (menu: Convert books → Create catalog of books in library).
    Select CSV as the format, and select in the options which columns to export. The column ‘id’ has to be exported; everything else is optional.
    Please make sure that every line in the exported file contains an id; exporting comments and other long texts is therefore not recommended.
    Open the file in an editor, delete all lines that shall not be added to the shelf, and save it.
    Important: The items (if more than the id is exported) have to be comma separated, otherwise the import won’t work.

    Open the file mass_add_books.py and set the parameters to the right values. The id of the shelf can be found in the browser
    by hovering the mouse over the “add to shelf” element; the id of the shelf is displayed in the address shown. booklist is the filename from above.

    username = 'admin'

    password = 'admin123'

    shelf_id = '1'

    booklist = 'booklist.csv'

    serveradress = 'http://127.0.0.1:8083'

    Make sure calibre-web is running, and start the script: python mass_add_books.py
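    For reference, the core of such a script can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the CSV parsing mirrors the export format described above, but the requests-style session and the /shelf/add/<shelf_id>/<book_id> endpoint are assumptions, not the actual contents of mass_add_books.py.

```python
# Hypothetical sketch of mass_add_books.py's job: collect the ids from the
# exported CSV and call the server once per book. The endpoint layout and
# session handling are assumed, not taken from the real script.
import csv
import io

def read_book_ids(csv_text):
    """Return the values of the 'id' column from the comma-separated export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["id"] for row in reader if row.get("id")]

def add_books_to_shelf(session, serveradress, shelf_id, book_ids):
    # Assumed endpoint layout; verify it against your Calibre-Web version.
    for book_id in book_ids:
        session.get(f"{serveradress}/shelf/add/{shelf_id}/{book_id}")
```

The parsing step is the part worth getting right: only rows with a non-empty id are kept, matching the advice above that every exported line must contain an id.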

    Done

    Visit original content creator repository
    https://github.com/OzzieIsaacs/cps-shelf-adder

  • RamanSpecCalibration

    RamanSpecCalibration

    Link to the article | DOI


    This work has been published in the following article:
    Toward standardization of Raman spectroscopy: Accurate wavenumber and intensity calibration using rotational Raman spectra of H2, HD, D2, and vibration–rotation spectrum of O2
    Ankit Raj, Chihiro Kato, Henryk A. Witek and Hiro‐o Hamaguchi
    Journal of Raman Spectroscopy
    10.1002/jrs.5955


    Set of functions in Python and IgorPro’s scripting language for the wavenumber calibration (x-axis) and intensity calibration (or correction of wavelength dependent sensitivity, i.e. y-axis) of Raman spectra. This repository requires the data on the rotational state (J), frequency, and the measured rotational Raman intensities from H2, HD, D2 and O2. Programs in Python and IgorPro are independent and perform the same job.

    • For wavenumber calibration, the pixel positions (with errors) of rotational Raman bands from H2, HD, D2 and rotation-vibration bands from O2 are required; these can be obtained from band fitting. The code performs weighted orthogonal distance regression (weighted ODR) to fit the x-y data pairs (corresponding to pixel – reference wavenumber), both having uncertainties, with a polynomial. Outputs are the wavenumber axis obtained from the fit and an estimate of its error.

    • For intensity calibration, the core of the code is a non-linear weighted minimization that obtains the coefficients of a polynomial representing the wavelength-dependent sensitivity. The output is a curve extrapolated to the dimensions required by the user for intensity calibration. An independent validation of the obtained sensitivity should be performed as a measure of accuracy.
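    As a toy illustration of the wavenumber-calibration fit, the sketch below fits a polynomial to pixel-wavenumber pairs with numpy.polyfit, weighting by the inverse band-position errors. This simplification only weights the y-side; the actual code uses weighted ODR, which also accounts for the x-uncertainties. All numbers are invented.

```python
# Simplified stand-in for the weighted-ODR wavenumber fit: a weighted
# least-squares polynomial fit of reference wavenumbers vs. pixel positions.
import numpy as np

pixel = np.array([100.0, 250.0, 400.0, 550.0, 700.0])           # band centers (px), invented
ref_wavenumber = np.array([-600.0, -300.0, 0.0, 300.0, 600.0])  # reference values (cm-1), invented
sigma_px = np.array([0.05, 0.04, 0.03, 0.04, 0.05])             # band-position errors (px)

# Cubic polynomial, weighted by inverse uncertainty.
coeffs = np.polyfit(pixel, ref_wavenumber, deg=3, w=1.0 / sigma_px)
wavenumber_axis = np.polyval(coeffs, np.arange(1024))           # calibrated axis for a 1024-px detector
```

The same fit in the repository additionally propagates the fit covariance into an error estimate for the wavenumber axis.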


    Why are we doing this?

    Intensity calibration

    In any Raman spectrometer, light scattered by the molecules travels to the detector while passing through/by some optical components (for example, lenses, mirrors, gratings, etc.). In this process, the scattered light intensity is modulated by the non-uniform reflectance/transmission of the optical components. Reflectance and transmission of the optics are wavenumber dependent. The net modulation to the light intensity, defined as M(ν), over the studied spectral range can be expressed as a product over the wavenumber-dependent performance of the i-th optical element as

    M(ν) = Π_i c_i w_i(ν)

    Here, c_i is a coefficient and w_i(ν) is the wavenumber-dependent transmission or reflectance of the i-th optical component.

    In most cases, determining the individual performance of each optical element is a cumbersome task. Hence, we limit our focus to approximately determining the relative form of M(ν) from experimental data. By relative form, it is meant that M(ν) is normalized to unity within the studied spectral range. If M(ν) is known, then we can correct the observed intensities in the Raman spectrum by dividing them by M(ν). In general, this is the principle of all intensity calibration procedures in optical spectroscopy.
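    Concretely, once a (normalized) M(ν) is in hand, the correction is a single element-wise division. A minimal numpy sketch with invented numbers:

```python
# Correcting observed intensities by the relative modulation M(nu),
# normalized to unity within the studied range. Numbers are illustrative.
import numpy as np

observed = np.array([0.8, 1.7, 2.9, 3.1, 1.9, 0.7])   # measured intensities
M = np.array([0.62, 0.80, 0.95, 1.00, 0.90, 0.70])    # relative sensitivity, max = 1

corrected = observed / M                              # calibrated intensities
```

Note that where M = 1 (the normalization point) the intensity is unchanged, and intensities are boosted most where the instrument is least sensitive.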

    In our work, we assume M(ν) ≅ C1(ν) C2(ν) / C0(ν). [The wavenumber dependence is not explicitly stated when C0, C1 and C2 are discussed in the following text.] The three contributions, C0(ν) to C2(ν), are determined in two steps in this analysis.

    • In the first step, the (C0 / C1) correction is determined using the wavenumber axis and the spectrum of a broadband white-light source. (See example)
    • C2 is determined from the observed Raman intensities, where the reference or true intensities are known or can be computed. This can be done using (i) pure-rotational Raman bands of molecular hydrogen and isotopologues, (ii) vibration-rotation Raman bands of the same gases and (iii) vibrational Raman bands of some liquids.

    The multiplicative correction to the Raman spectrum for intensity calibration is then (C0 / C1C2).

    The present work is concerned with the anti-Stokes and Stokes region (from -1100 to 1650 cm-1). For a similar analysis for the higher wavenumber region (from 2300 to 4200 cm-1) see this repository and article.


    Method

    Wavenumber calibration: A fit of the reference transition wavenumbers against the band positions in pixels is performed to obtain the (relative) wavenumber axis.

    • S. B. Kim, R. M. Hammaker, W. G. Fateley, Appl. Spectrosc. 1986, 40, 412.
    • H. Hamaguchi, Appl. Spectrosc. Rev. 1988, 24, 137.
    • R. L. McCreery, Raman Spectroscopy for Chemical Analysis, John Wiley & Sons, New York, 2000.
    • N. C. Craig, I. W. Levin, Appl. Spectrosc. 1979, 33, 475.

    Intensity calibration: Ratios of intensities from common rotational states are compared to the corresponding theoretical ratios to obtain the wavelength-dependent sensitivity curve.

    • H. Okajima, H. Hamaguchi, J. Raman Spectrosc. 2015, 46, 1140. (10.1002/jrs.4731)
    • H. Hamaguchi, I. Harada, T. Shimanouchi, Chem. Lett. 1974, 3, 1405. (cl.1974.1405)
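    To make the ratio scheme concrete, the sketch below linearizes it: if the sensitivity is modelled as exp(a1·ν + a2·ν²), then log(observed ratio / true ratio) = p(ν_Stokes) - p(ν_anti-Stokes) is linear in the coefficients and solvable by least squares. The repository itself performs a non-linear weighted minimization; the frequencies and coefficients here are synthetic, roughly H2/D2-like values chosen for illustration.

```python
# Linearized toy version of the intensity-calibration fit: recover the
# log-sensitivity polynomial from Stokes/anti-Stokes band-area ratios.
import numpy as np

nu_stokes = np.array([354.4, 587.0, 814.4])       # Stokes wavenumbers (cm-1), illustrative
nu_anti = np.array([-179.1, -354.4, -587.0])      # anti-Stokes wavenumbers (cm-1), illustrative
a_true = np.array([2e-4, -3e-8])                  # synthetic sensitivity coefficients

def p(nu, a):
    return a[0] * nu + a[1] * nu**2               # log-sensitivity polynomial (no constant term)

# Synthetic log(observed/true) ratios generated from the model itself.
log_ratio = p(nu_stokes, a_true) - p(nu_anti, a_true)

# Each row expresses p(nu_S) - p(nu_aS) in the unknown coefficients.
A = np.column_stack([nu_stokes - nu_anti, nu_stokes**2 - nu_anti**2])
a_fit, *_ = np.linalg.lstsq(A, log_ratio, rcond=None)

# Sensitivity curve over the studied range, to divide out of measured spectra.
sensitivity = np.exp(p(np.linspace(-1100, 1650, 512), a_fit))
```

The constant term of the polynomial is irrelevant here because it cancels in every Stokes/anti-Stokes ratio; this is why the fitted sensitivity is only defined up to a normalization.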

    Input data required

    Wavenumber calibration

    • List of band positions and error (in pixels) of rotational Raman spectra of H2, HD, D2 and rotational-vibrational Raman spectra of O2.

    Intensity calibration

    • List of all data required: rotational state (J), experimental band area ratio (Stokes/anti-Stokes), theoretical band area ratio (Stokes/anti-Stokes), transition frequency (Stokes) in cm-1, transition frequency (anti-Stokes) in cm-1 and the weight (used for the fit). For O2, when using the vibration-rotation transitions (S1- and O1-branch), include the data and the frequencies for these transitions. All of the above correspond to pairs of observed bands originating from a common rotational state.

    See specific program’s readme regarding the use of the above data in the program for fit.

    Available programs

    • Set of Igor Procedures
    • A Python module for performing non-linear fit on the above mentioned data set to obtain the wavelength dependent sensitivity.

    Additionally, programs to compute the theoretical pure rotational Raman spectra (for H2, HD and D2) are also included.

    Usage

    Clone the repository or download the zip file. Depending on your choice of programming environment (Python or IgorPro), refer to the specific README inside the folders and proceed.

    Comments

    • On convergence of the minimization scheme in intensity calibration : The convergence of the optimization has been tested with artificial and actual data giving expected results. However, in certain cases convergence in the minimization may not be achieved based on the specific data set and the error in the intensity.

    • Accuracy of the calibration : It is highly suggested to perform an independent validation of the intensity calibration. This validation can be using anti-Stokes to Stokes intensity for determining the sample’s temperature (for checking the accuracy of wavelength sensitivity correction) and calculating the depolarization ratio from spectra (for checking the polarization dependent sensitivity correction). New ideas regarding testing the validity of intensity calibration are welcome. Please give comments in the “Issues” section of this repository.

    Credits

    Non-linear optimization in SciPy : Travis E. Oliphant. Python for Scientific Computing, Computing in Science & Engineering, 9, 10-20 (2007), DOI:10.1109/MCSE.2007.58

    Matplotlib : J. D. Hunter, “Matplotlib: A 2D Graphics Environment”, Computing in Science & Engineering, vol. 9, no. 3, pp. 90-95, 2007.

    Orthogonal Distance Regression as used in IgorPro and SciPy : (i) P. T. Boggs, R. Byrd, R. Schnabel, SIAM J. Sci. Comput. 1987, 8, 1052. (ii) P. T. Boggs, J. R. Donaldson, R. h. Byrd, R. B. Schnabel, ACM Trans. Math. Softw. 1989, 15, 348. (iii) J. W. Zwolak, P. T. Boggs, L. T. Watson, ACM Trans. Math. Softw. 2007, 33, 27. (iv) P. T. Boggs and J. E. Rogers, “Orthogonal Distance Regression,” in “Statistical analysis of measurement error models and applications: proceedings of the AMS-IMS-SIAM joint summer research conference held June 10-16, 1989,” Contemporary Mathematics, vol. 112, pg. 186, 1990.

    Support/Questions/Issues

    Please use “Issues” section for asking questions and reporting issues.




    Other repositories on this topic :

    The present repository is concerned with the anti-Stokes and Stokes region spanning from -1040 to 1700 cm-1 using H2, HD, D2 and O2. In a different work, the higher-wavenumber spectral region (from 2300 to 4200 cm-1) was investigated.

    Accurate intensity calibration of multichannel spectrometers using Raman intensity ratios
    Ankit Raj, Chihiro Kato, Henryk A. Witek and Hiro‐o Hamaguchi
    Journal of Raman Spectroscopy
    10.1002/jrs.6221

    See online repository IntensityCalbr and the above article (JRS.6221) for more details.

    Visit original content creator repository https://github.com/ankit7540/RamanSpecCalibration
  • continue-show

    Continue your TV series where you left off

    How it works

    1. The script scans the VLC history. If any video from the working directory or its subdirectories is there, it continues from the most recent one.
    2. If no videos from the working directory were found in the VLC history (it is about 30 entries long, so a video can drop out of it), the script looks for a history file it wrote itself in the root of the working directory.
    3. If the previous attempt was unsuccessful too, the first video file will be played.
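    The three-step fallback can be sketched in Python (the actual project is a Windows batch script; the function and its arguments are illustrative):

```python
# Selection logic of continue-show, as pseudocode made runnable.
def pick_video(vlc_history, own_history_entry, all_videos, working_dir):
    # 1. Most recent VLC history entry that lives under the working directory.
    for path in vlc_history:                  # assumed ordered most-recent-first
        if path.startswith(working_dir):
            return path
    # 2. Fall back to the entry from the script's own history file.
    if own_history_entry:
        return own_history_entry
    # 3. Otherwise play the first video file.
    return sorted(all_videos)[0]
```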

    How to use (Windows)

    1. Navigate to the directory with the video files. It can contain subdirectories as well, the script scans them, too.
    2. Run the continue-show.bat script. You can create another batch script in this directory to call it as well.

    Limitations

    VLC stores a timestamp for each video it has played, indicating where it was left off. If a video was finished, its timestamp is 0. The same happens if it was left off in the first two minutes, so the script can’t distinguish the two states. If a video was left off in the first two minutes, the next one will be played when the script is called again.

    Visit original content creator repository
    https://github.com/megyerib/continue-show

  • geekx

    Geekx is a free and open-source four-column portfolio template for Bootstrap with a responsive, high-quality UI, created by Orbit Themes.

    View Demo | Download ZIP

    Geekx Four Col Portfolio Preview

    View Live Preview

    Status

    GitHub package version GitHub License npm Build Status dependencies Status devDependencies Status

    Features

    • Responsive Design.
    • Developed With Bootstrap 4.
    • SEO Semantic Code.
    • Simple and Easy To Use.
    • HTML5 ready. Use the new elements with confidence.
    • Designed with progressive enhancement in mind.

    Download and Installation

    To begin using this template, choose one of the following options to get started:

        # clone the repository
        $ git clone https://github.com/orbitthemes/geekx.git
    
        # go into the directory
        $ cd geekx
    
        # install all dependencies
        $ npm install
    
        #For Development Options
        $ gulp dev

    Usage

    Basic Usage

    After downloading, simply edit the HTML and CSS files included with the template in your favorite text editor to make changes. These are the only files you need to worry about; you can ignore everything else! To preview the changes you make to the code, you can open the index.html file in your web browser.

    Advanced Usage

    After installation, run npm install and then run gulp dev which will open up a preview of the template in your default browser, watch for changes to core template files, and live reload the browser when changes are saved. You can view the gulpfile.js to see which tasks are included with the dev environment.

    Gulp Tasks

    • gulp the default task that builds everything.
    • gulp dev browserSync opens the project in your default browser and live reloads when changes are made.
    • gulp css:compile compiles the SCSS into CSS file.
    • gulp css:minify minifies the compiled CSS file.
    • gulp css compiles SCSS files into CSS and minifies the CSS.
    • gulp js combines all JS scripts into one file named main.js, minifies it, and saves it as main.min.js.
    • gulp export copies dependencies from node_modules to the dist directory.
    • gulp clean removes all the directories inside dist, minified JS files, and all compiled CSS files.

    Bugs and Issues

    Have a bug or an issue with this template? Open a new issue on GitHub or leave a comment on the template overview page at Orbit Themes.

    Custom Builds

    You can hire Orbit Themes to create a custom build of any template, or create something from scratch using Bootstrap. For more information, visit the Contact Page.

    Other Templates

    • Album Plus – Album Plus is a Simple Photography and Magazine template for Bootstrap 4.
    • Blog – Blog Is The Beautiful Blogger Template For Bootstrap 4.
    • My Shop – My Shop is a Simple E-Commerce template for Bootstrap 4.
    • Carousel Plus – Clean and Responsive Bootstrap 4 slideshow Template.
    • Checkout Plus – Simple, Clean and Stylish Bootstrap 4 Checkout Page Template.
    • Cover Plus – Cover Plus Is The Beautiful One Page Template for Bootstrap 4.
    • Dashboard – Free and Responsive admin dashboard template for bootstrap 4.
    • Healthy – Clean Responsive Fitness Landing Page For Bootstrap 4.
    • Kreative – Kreative Business Landing Page Template.
    • Pricing Plus – Clean and Responsive Pricing Page Template With High Quality UI.

    How to contribute

    To contribute, please ensure that you have stable Node.js and npm installed. Test whether the Gulp CLI is installed by running gulp --version. If the command isn’t found, run npm install -g gulp. For more information about installing Gulp, see Gulp’s Getting Started guide.

    To install all gulp dependencies, run npm install.

    If gulp is installed, follow the steps below.

    • Fork and clone the repository.
    • Run gulp dev, this will open Template on your default browser.
    • Now you can code, code and code!
    • Submit a pull request.

    About

    Orbit Themes is an open source library of free Bootstrap templates and themes. All of the free templates and themes on Orbit Themes are released under the MIT license, which means you can use them for any purpose, even for commercial projects.

    Orbit Themes was created by and is maintained by Sandeep Prasad Bhatt .

    Orbit Theme Templates and Themes are based on the Bootstrap framework created by Mark Otto and Jacob Thorton.

    Credits

    Copyright and License

    Copyright 2018 Orbit Themes. Code released under the MIT.

    Visit original content creator repository https://github.com/orbitthemes/geekx
  • ChatBot-MentalHealth-BERT

    Mental Health Chatbot – Version 1.0

    Home Screen
    Mental Health Chatbot logo

    Project Description

    (original GitHub version)
    This project is a mental health chatbot that used Natural Language Processing (NLP) to analyze the messages entered by users (either as text or audio), predict their emotional state, and generate supportive responses.

    • Voice interaction (server-side): The user could speak, and the server generated a response audio file using pyttsx3.
    • Version 1.0: Basic implementation with 11 emotions; experimental and not a substitute for professional advice.

    (Hugging Face Spaces version)
    The chatbot is still oriented toward mental health, but voice conversion (both STT and TTS) is now done in the browser (using the Web Speech API). The server only handles text (Flask + BERT).

    • Voice interaction (client-side): The user speaks and the browser (JavaScript) converts the audio to text; the server responds with text, and the browser uses Speech Synthesis to “speak” the response.
    • No audio files are generated on the server, and no TTS (pyttsx3) or STT (PyAudio) libraries are installed.

    Technologies Used

    • Python: Flask (web backend), Transformers, PyTorch
    • BERT (Bidirectional Encoder Representations from Transformers)
    • Natural Language Processing (NLP)
    • Speech Recognition and Speech Synthesis in the Browser (Web Speech API)
    • Text-to-Speech Synthesis (pyttsx3/pydub, original version only)
    • HTML, CSS, JavaScript (frontend)

    Chatbot Architecture

    The main pipeline this project follows is:

     -> Speech Recognition -> Natural Language Understanding -> Dialog Manager <-> Task Manager
        Text-to-Speech Synthesis <- Natural Language Generation <- Dialog Manager
    
    1. Speech Recognition: The user speaks and the browser converts the audio to text (Web Speech API).
    2. Natural Language Understanding: The text is sent to Flask, where BERT analyzes the emotion.
    3. Dialog Manager: Manages the conversation logic and decides on the response.
    4. Text-to-Speech Synthesis: The response text is returned to the browser, which speaks it (Speech Synthesis).

    Detected Emotions

    The model (fine-tuned BERT) recognizes the following emotions (label names kept as in the dataset):

    • FELICIDAD
    • NEUTRAL
    • DEPRESIÓN
    • ANSIEDAD
    • ESTRÉS
    • EMERGENCIA
    • CONFUSIÓN
    • IRA
    • MIEDO
    • SORPRESA
    • DISGUSTO

    A dataset of ~500 samples per emotion was used (~5500 rows in total).

    Screenshots

    Home Page

    Home Page
    Mental Health Chatbot home page

    Chatbot Interface

    Chatbot Interface
    Chatbot interface

    Voice Recognition Enabled

    Voice Recognition Enabled
    Voice recording indicator

    Project Structure

    ChatBot/
    ├── conversations/
    ├── data/
    │   └── emotion_dataset.csv
    ├── models/
    │   ├── bert_emotion_model/
    │   │   ├── checkpoint-1600
    │   │   ├── checkpoint-1650
    │   │   ├── config.json
    │   │   ├── model.safetensors
    │   │   ├── special_tokens_map.json
    │   │   ├── tokenizer.json
    │   │   ├── tokenizer_config.json
    │   │   ├── training_args.bin
    │   │   └── vocab.txt
    │   ├── chatbot_model.py
    │   └── responses.json
    ├── static/
    │   ├── audio/
    │   ├── css/
    │   │   └── styles.css
    │   ├── img/
    │   └── js/
    │       └── scripts.js
    ├── templates/
    │   ├── chatbot.html
    │   └── index.html
    ├── app.py
    ├── chatbot.log
    ├── error.log
    ├── requirements.txt
    └── train_model.py
    

    Installation and Setup

    1. Clone the repository with Git LFS

    If the project uses large files (such as BERT models), make sure Git LFS is installed before cloning the repository.

    # Install Git LFS (if you don't have it)
    git lfs install
    
    # Clone the repository
    git clone https://github.com/tu-usuario/ChatBot-MentalHealth.git
    cd ChatBot-MentalHealth

    2. Create and activate a virtual environment

    python -m venv venv
    # On Windows
    venv\Scripts\activate
    # On macOS/Linux
    source venv/bin/activate

    3. Install dependencies

    pip install -r requirements.txt

    4. Run the application

    python app.py

    The application will run at http://127.0.0.1:5000/.

    Code Example (train_model.py)

    class CustomTrainer(Trainer):
        def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
            labels = inputs.get("labels").to(model.device)
            outputs = model(**inputs)
            logits = outputs.get("logits")
            loss = custom_loss(labels, logits)  # loss using class_weights
            return (loss, outputs) if return_outputs else loss
    
    def custom_loss(labels, logits):
        loss_fct = torch.nn.CrossEntropyLoss(weight=class_weights)
        return loss_fct(logits, labels)

    In this way, each emotion receives a different weight, mitigating the risk that the model ignores the under-represented classes.
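    One common way to obtain class weights like these is inverse-frequency weighting over the training labels; the helper below illustrates that idea (it is not necessarily how this repository computes class_weights). The resulting list would be wrapped in torch.tensor(...) before being passed to CrossEntropyLoss.

```python
# Inverse-frequency class weights: rare classes get proportionally larger
# weights, so the loss does not ignore under-represented emotions.
from collections import Counter

def inverse_frequency_weights(labels, num_classes):
    counts = Counter(labels)
    total = len(labels)
    # total / (num_classes * count) averages to ~1 for a balanced dataset.
    return [total / (num_classes * counts.get(c, 1)) for c in range(num_classes)]
```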

    File Usage Flow in the Project

    1. Load the model: The model weights are in model.safetensors, along with config.json, tokenizer.json, etc.
    2. Tokenization: The input text is converted into tokens with the BERT tokenizer (tokenizer.json, vocab.txt).
    3. Inference: The user’s text is processed with BERT to predict the emotion and generate a response.
    4. Response: The text is sent back to the browser and, if speech synthesis is enabled, it is spoken aloud.

    Final Notes

    • This version (1.0) is experimental and is not a substitute for professional mental health advice.
    • Further refinement of the model, additional emotions, and a larger dataset are recommended.
    • In an emergency or at-risk situation, seek help from a mental health professional.
    • Note the difference between the original version (which generated .mp3 files on the server with pyttsx3) and the current one (Web Speech API in the browser).
    • The architecture and the technologies used are laid out above.
    • TTS libraries no longer need to be installed in the Docker container, since everything happens on the client.

    Contributors

    For any questions or suggestions, contact me at: nicolasceballosbrito@gmail.com 🙂

    Thanks for trying the Mental Health Chatbot!
    If you would like to contribute, feel free to fork the project and submit pull requests.

    Visit original content creator repository https://github.com/Nico2603/ChatBot-MentalHealth
  • opaque

    OPAQUE

    OPAQUE Go Reference codecov

      import "github.com/bytemare/opaque"
    

    This package implements OPAQUE, an asymmetric password-authenticated key exchange protocol that is secure against pre-computation attacks. It enables a client to authenticate to a server without ever revealing its password to the server.

    This implementation is developed by one of the authors of the IETF Internet-Draft. The main branch is in sync with the latest developments of the draft, and the releases correspond to the official draft versions.

    What is OPAQUE?

    OPAQUE is an aPAKE that is secure against pre-computation attacks. OPAQUE provides forward secrecy with respect to password leakage while also hiding the password from the server, even during password registration. OPAQUE allows applications to increase the difficulty of offline dictionary attacks via iterated hashing or other key stretching schemes. OPAQUE is also extensible, allowing clients to safely store and retrieve arbitrary application data on servers using only their password.

    References

    Documentation Go Reference

    You can find the documentation and usage examples in the package doc and the project wiki .

    Versioning

    SemVer is used for versioning. For the versions available, see the tags on the repository.

    Minor v0.x versions match the corresponding CFRG draft version; the master branch implements the latest changes of the draft development.

    Contributing

    Please read CONTRIBUTING.md for details on the code of conduct, and the process for submitting pull requests.

    License

    This project is licensed under the MIT License – see the LICENSE file for details.

    Visit original content creator repository https://github.com/bytemare/opaque
  • NeuroFCW

    🚀 NeuroFCW – Neural Network-Based FCW System

    Authors – Aryan Pandey, Priyadarshi Uttpal and Sanket Poojary

    NeuroFCW is an advanced Forward Collision Warning (FCW) system powered by Generative AI, Neural Networks, and a Graph-RAG architecture. It automates code generation, test case creation, and continuous validation for FCW systems, ensuring robustness, precision, and real-world readiness.


    📊 System Architecture

    Below is the high-level system architecture overview for NeuroFCW:

    System Architecture


    🧠 Key Features

    1. Document Segmentation and Preprocessing

      • Breaks down input documents into manageable segments for efficient processing.
    2. Graph-RAG Knowledge Retrieval

      • Stores and retrieves safety standards, MISRA guidelines, test cases, and code examples using Neo4j Graph Knowledge Base.
    3. Graph-RAG Code Generation

      • Generates FCW code compliant with MISRA standards using:
        • Contextual data retrieval
        • Code generation with a Large Language Model (LLM) API (Llama-3.3-70b pre-trained LLM)
    4. Graph-RAG Test Case Generation

      • Automatically generates comprehensive test cases with relevant test patterns.
    5. Fine-Tuned YOLO Model

      • Handles object detection and computes parameters for FCW validation.
    6. Validation & Continuous Improvement

      • Failed test cases are logged and used to improve future FCW code.
    7. Performance Logging

      • Logs critical metrics such as detection accuracy, processing speed, and code validation success rates to ensure real-world deployment readiness.
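    As a flavor of the parameters such a validation stage works with, a textbook forward-collision check uses time-to-collision (TTC) computed from the detected object’s distance and closing speed. The formula and the 2.5 s threshold below are generic assumptions for illustration, not NeuroFCW’s actual implementation:

```python
# Generic time-to-collision (TTC) check of the kind an FCW validator uses.
def time_to_collision(distance_m, closing_speed_mps):
    """TTC in seconds; None when the gap is not closing."""
    if closing_speed_mps <= 0:
        return None
    return distance_m / closing_speed_mps

def should_warn(distance_m, closing_speed_mps, ttc_threshold_s=2.5):
    ttc = time_to_collision(distance_m, closing_speed_mps)
    return ttc is not None and ttc < ttc_threshold_s
```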

    🛠 Tech Stack

    • AI/ML: YOLOv11, LangChain, OpenAI APIs
    • Databases: Neo4j, SQLite
    • Programming Languages: Python
    • DevOps Tools: Jenkins, GitHub Actions, CARLA Simulation
    • Frameworks: Large Language Models (LLMs), Graph-RAG

    📂 System Workflow

    The system operates in the following steps:

    1. Input Documents → Segmented into manageable chunks.
    2. Knowledge Base Processing → Neo4j Aura graph database retrieves relevant guidelines and safety standards.
    3. Code Generation → LLM APIs generate FCW code tailored to input requirements with MISRA compliance.
    4. Test Case Generation → Retrieves patterns and validates with YOLO.
    5. Validation → Test results are logged, and the FCW system is updated continuously.
    6. Performance Metrics → Critical KPIs (accuracy, processing speed, and anomaly handling) are logged for analysis.

    Output

    • MISRA-Compliant FCW Code
    • Comprehensive Test Cases
    • Validated FCW Code Packages for Deployment

    📈 Future Scope

    • Multi-Sensor Fusion with LiDAR, Radar, and Camera Systems
    • Enhanced Fine-Tuning of LLMs for Anomaly Handling
    • Reinforcement Learning for ADAS Decision-Making
    • Energy-Efficient ML Models using lightweight and quantized ML models

    🤝 Contributors

    • Aryan Pandey
    • Priyadarshi Uttpal
    • Sanket Poojary

    🔗 How to Use

    1. Clone the repository:
      git clone https://github.com/ltd-ARYAN-pvt/NeuroFCW.git
    2. Install required dependencies:
      pip install -r requirements.txt
    3. Run the main program:
      python app.py

    🌟 Contact

    For more information, reach me at:


    Visit original content creator repository https://github.com/ltd-ARYAN-pvt/NeuroFCW
  • acmtc-redismq

    acmtc-redismq

    License Maven Central GitHub release ACMTC Author ACMTC QQ

    Introduction

    acmtc-redismq is an open source solution suite for easily using redis as MQ.

    Functions

    • redis producer
    • redis consumer listener
    • distributed deployment
    • errors handler

    Change Log

    • 1.0.0
      Implemented the RedisMQ functionality
    • 1.0.1
      1. Bug fix: handle redundant messages when starting the service
      2. Introduced ThreadPoolTaskExecutor to control the concurrent multithreading of the RedisMQ consumer

    How to use

    Environment

    • JDK 1.9+
    • Spring Boot 1.5+
    • Redis

    maven use

    • pom.xml
    	<dependency>
    	    <groupId>com.acmtc</groupId>
    	    <artifactId>acmtc-redismq</artifactId>
    	    <version>1.0.1-RELEASE</version>
    	</dependency>
    

    and use the default Spring Boot Redis dependency.

    • application.yml
    redis-mq:
      maxErrorCount: 3                                # redisMQ consumer error count, greater than it will be discarded.
      config:
          corePoolSize: 10                            # Set the redisMQ consumer ThreadPoolExecutor's core pool size.
          maxPoolSize: 100                            # Set the redisMQ consumer ThreadPoolExecutor's maximum pool size.
          keepAliveSeconds: 120                       # Set the redisMQ consumer ThreadPoolExecutor's keep-alive seconds.
          queueCapacity: 2                            # Set the capacity for the redisMQ consumer ThreadPoolExecutor's BlockingQueue.
          allowCoreThreadTimeOut: false               # Specify whether to allow core threads to time out.
      consumer:
        topicMainSwitch: false                        # redisMQ master switch for consumer listeners; false opens all listeners, true uses the per-topic switches below; default false
        switchList:                                   # redisMQ per-consumer customization
          - topic: channels                           # specific consumer name; the same topic as used in @RedisConsumerAnnotation
            topicSwitch: true                         # true for open listener, default false
          - topic: channel2
            topicSwitch: false
          - topic: apsToolsChannels
            topicSwitch: true
    

    The standard Spring Boot Redis configuration is omitted here.

    • Configuration
    @Configuration
    public class RedisConfig {
    	/**
    	 * Override the redisTemplate bean generated in RedisAutoConfiguration,
    	 * changing the generics to the types actually used in the project;
    	 * otherwise startup fails with an error.
    	 * @param redisConnectionFactory
    	 * @return
    	 * @throws UnknownHostException
    	 */
    	@Bean
    	@ConditionalOnMissingBean(name = "redisTemplate")
    	public RedisTemplate<?, ?> redisTemplate(
    			RedisConnectionFactory redisConnectionFactory)
    					throws UnknownHostException {
    		RedisTemplate<?, ?> template = new RedisTemplate<Serializable, Serializable>();
    		template.setConnectionFactory(redisConnectionFactory);
    		return template;
    	}
    	
    }
    

    This generates the redisTemplate bean.

    • startup
    @EntityScan("com.acmtc")
    @SpringBootApplication(scanBasePackages = {"com.acmtc"})
    

    ServerApplication will scan package “com.acmtc”

    • producer use
    redisProducer.sendChannelMessage("testchannel", message);
    

    Please pass the message as a JSONObject.

    • consumer use
    @RedisConsumerAnnotation(topic = "testchannel")
    public class ConsumerExample  extends RedisConsumer {
        public void onMessage(JSONObject json) {
            log.info("Test consumer, received message: " + json);
        }
    }
    

    source code

    Download code : https://github.com/ACMTC/acmtc-redismq.git

    Visit original content creator repository https://github.com/ACMTC/acmtc-redismq
  • artifact-grabber

    Maven 2 Artifact Grabber

    What is it?

    A standalone executable jar that uses the Eclipse Aether libraries to download an artifact over HTTP(S)
    from a remote Maven 2 format repository.

    This is a modified version of Sonatype's artifact-resolver tool.

    Build executable jar

    ./gradlew build
    

    How to use

    java -jar artifact-grabber.jar
        --repository-url "http://remote-repository.com/public/"
        --user user:pass          # optional if authentication is not required
        --output <dir>            # directory to download to, optional (default: current directory)
        --name <name>             # new name for the downloaded artifact, optional
        com.mypackage:artifact
    

    If the artifact version is not specified, you will get the latest version (including SNAPSHOT versions).
    To get the latest non-SNAPSHOT version, use the pseudo-version 'RELEASE'. Example: com.mypackage:artifact:RELEASE
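
    The selection rule can be sketched in shell. This illustrates only the documented semantics, not the tool's internal logic, and the version list is made up:

```shell
# Hypothetical version list, as a repository's maven-metadata.xml might report it
versions="1.0.0
1.1.0-SNAPSHOT
1.1.0
1.2.0-SNAPSHOT"

# Default behaviour: latest version, SNAPSHOT versions included
latest=$(printf '%s\n' "$versions" | sort -V | tail -n 1)
echo "$latest"     # 1.2.0-SNAPSHOT

# 'RELEASE' behaviour: drop SNAPSHOTs, then take the highest remaining version
release=$(printf '%s\n' "$versions" | grep -v -- '-SNAPSHOT' | sort -V | tail -n 1)
echo "$release"    # 1.1.0
```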

    Using args file

    The tool uses Groovy's CliBuilder to process arguments.
    Instead of passing arguments such as credentials on the command line, you can put them in a file.

    Example:

    1. Create a file named script.args in the same directory as the jar
    2. Put one argument on each line, like this:
        --repository-url "http://remote-repository.com/public/"
        --user user:pass   
        --name artifact.jar
        
    3. Use the special @ prefix and pass the file name as an argument to the script. Each line of the file will be read as if
      it was passed on the command line. Example:

       java -jar artifact-grabber.jar @script.args com.mypackage:artifact
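
    Put together, the steps above look like this (the repository URL and credentials are placeholders):

```shell
# 1. Create script.args next to the jar, one argument per line
cat > script.args <<'EOF'
--repository-url "http://remote-repository.com/public/"
--user user:pass
--name artifact.jar
EOF

# 2. Each line is expanded as if typed on the command line:
#    java -jar artifact-grabber.jar @script.args com.mypackage:artifact

# Sanity check: the file holds three option lines
grep -c '^--' script.args    # 3
```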
       

    Visit original content creator repository
    https://github.com/sgornostal/artifact-grabber

  • puppet-mackerel_agent

    Puppet module for mackerel-agent

    Puppet Forge Dependency Status Build Status

    Table of Contents

    1. Overview – What is the mackerel_agent module?
    2. Setup – The basics of getting started
    3. Usage – How to use the module
    4. Limitations – OS compatibility, etc.
    5. Development – Guide for contributing to the module

    Overview

    This Puppet module installs and configures mackerel-agent.

    Setup

    Install via Puppet Forge:

    $ puppet module install tomohiro-mackerel_agent

    Usage

    class { 'mackerel_agent':
      apikey              => 'Your API Key',
      roles               => ['service:web', 'service:database'],
      host_status         => {
        on_start => 'working',
        on_stop  => 'poweroff'
      },
      ignore_filesystems  => '/dev/ram.*',
      use_metrics_plugins => true,
      use_check_plugins   => true,
      metrics_plugins     => {
        apache2     => '/usr/local/bin/mackerel-plugin-apache2',
        'php-opcache' => '/usr/local/bin/mackerel-plugin-php-opcache'
      },
      check_plugins       => {
        access_log => '/usr/local/bin/check-log --file /var/log/access.log --pattern FATAL',
        check_cron => '/usr/local/bin/check-procs -p crond',
        check_ssh  => {
          command               => 'ruby /path/to/check-ssh.rb',
          notification_interval => '60',
          max_check_attempts    => '3',
          check_interval        => '5'
        }
      }
    }

    Hiera

    mackerel_agent::apikey: 'Your API Key'
    mackerel_agent::roles:
      - 'service:web'
      - 'service:database'
    mackerel_agent::host_status:
      on_start: working
      on_stop: poweroff
    mackerel_agent::ignore_filesystems: '/dev/ram.*'
    mackerel_agent::use_metrics_plugins: true
    mackerel_agent::use_check_plugins: true
    mackerel_agent::metrics_plugins:
      apache2: '/usr/local/bin/mackerel-plugin-apache2'
      php-opcache: '/usr/local/bin/mackerel-plugin-php-opcache'
    mackerel_agent::check_plugins:
      access_log: '/usr/local/bin/check-log --file /var/log/access.log --pattern FATAL'
      check_cron: '/usr/local/bin/check-procs -p crond'
      ssh:
        command: 'ruby /path/to/check-ssh.rb'
        notification_interval: '60'
        max_check_attempts: '3'
        check_interval: '5'

    Limitations

    These operating systems are supported:

    • RHEL 6
    • CentOS 6
    • Debian 7
    • Ubuntu 14.04

    If you would like another operating system supported, please implement the support and contribute it.

    Development

    Requirements

    • Puppet 3.7 or later
    • librarian-puppet

    Setup development environments

    Install dependencies:

    $ bundle install --path vendor/bundle
    $ bundle exec librarian-puppet install

    You can run smoke tests:

    $ export MACKEREL_API_KEY="your api key" # Export your Mackerel API key
    $ vagrant up
    $ vagrant provision

    Testing

    Unit tests:

    $ bundle exec rake spec

    Acceptance tests:

    $ export DOCKER_HOST=tcp://your-docker-host-ip:port
    $ BEAKER_set=centos-6-x64 bundle exec rake beaker

    Contributing

    See CONTRIBUTING guideline.

    LICENSE

    © 2014 – 2016 Tomohiro TAIRA.

    This project is licensed under the Apache License, Version 2.0. See LICENSE for details.

    Visit original content creator repository https://github.com/tomohiro/puppet-mackerel_agent