Author: u1sbxhkjoqp2

  • docker-ddclient

    docker-ddclient

    Install ddclient into a Linux container

    ddclient

    Tags

    Several tags are available:

    Description

    ddclient is a Perl client used to update dynamic DNS entries for accounts with dynamic DNS service providers. It can update more than just DynDNS, and it can fetch your WAN IP address in several different ways.

    https://sourceforge.net/p/ddclient/wiki/Home/

    Usage

    docker create --name=ddclient \
      -v <path to ddclient.conf>:/etc/ddclient/ddclient.conf \
      -e UID=<UID default:12345> \
      -e GID=<GID default:12345> \
      -e AUTOUPGRADE=<0|1 default:0> \
      -e TZ=<timezone default:Europe/Brussels> \
      -e DOCKMAIL=<mail address> \
      -e DOCKRELAY=<smtp relay> \
      -e DOCKMAILDOMAIN=<originating mail domain> \
      digrouz/ddclient
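
    A minimal ddclient.conf mounted at the path above might look roughly like the sketch below. This is only an illustration with placeholder values; the protocol, server, and credentials depend entirely on your DNS provider, so consult the ddclient documentation for the settings that apply to you.

    # /etc/ddclient/ddclient.conf (placeholder values, sketch only)
    daemon=300                          # check for IP changes every 300 seconds
    use=web, web=checkip.dyndns.org     # fetch the WAN IP from a web service
    protocol=dyndns2                    # provider protocol (depends on your service)
    server=members.dyndns.org
    login=your-username
    password=your-password
    yourhostname.dyndns.org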
    

    Environment Variables

    When you start the ddclient image, you can adjust the configuration of the ddclient instance by passing one or more environment variables on the docker run command line.

    UID

    This variable is optional and specifies the user ID under which the application runs. It defaults to 12345.

    GID

    This variable is optional and specifies the group ID under which the application runs. It defaults to 12345.

    AUTOUPGRADE

    This variable is optional and specifies whether the container runs a software update at startup. Valid values are 0 and 1. It defaults to 0.

    TZ

    This variable is optional and specifies the timezone configured inside the container. It defaults to Europe/Brussels.

    DOCKRELAY

    This variable is optional and specifies the SMTP relay used to send email. Leave it unset if mail notifications are not required.

    DOCKMAIL

    This variable is optional and specifies the email address used to send notifications. Leave it unset if mail notifications are not required.

    DOCKMAILDOMAIN

    This variable is optional and specifies the originating mail domain, i.e. the domain the mail appears to come from. Leave it unset if mail notifications are not required.

    Notes

    • This container is built using s6-overlay
    • The Docker entrypoint can upgrade the operating system at each startup. To enable this feature, add -e AUTOUPGRADE=1 when creating the container.
    • A Helm chart is available in the chart folder with an example values.yaml

    Issues

    If you encounter an issue, please open a ticket on GitHub.

    Visit original content creator repository
  • dlist

    Difference Lists

    List-like types supporting O(1) append and snoc operations.

    Installation

    dlist is a Haskell package available from Hackage. It can be installed with cabal or stack.

    See the change log for the changes in each version.

    Usage

    Here is an example of “flattening” a Tree into a list of the elements in its Leaf constructors:

    import qualified Data.DList as DList
    
    data Tree a = Leaf a | Branch (Tree a) (Tree a)
    
    flattenSlow :: Tree a -> [a]
    flattenSlow = go
      where
        go (Leaf x) = [x]
        go (Branch left right) = go left ++ go right
    
    flattenFast :: Tree a -> [a]
    flattenFast = DList.toList . go
      where
        go (Leaf x) = DList.singleton x
        go (Branch left right) = go left `DList.append` go right

    (The above code can be found in the benchmark.)

    flattenSlow is likely to be slower than flattenFast:

    1. flattenSlow uses ++ to concatenate lists, each of which is recursively constructed from the left and right Tree values in the Branch constructor.

    2. flattenFast does not use ++ but constructs a composition of functions, each of which is a “cons” introduced by DList.singleton ((x :)). The function DList.toList applies the composed function to [], constructing a list in the end. (The sketch below shows roughly how these pieces are defined.)
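
    For reference, the DList internals that appear in the evaluations below boil down to roughly the following. This is a simplified sketch of the definitions in the dlist package; the real Data.DList.Unsafe module contains more.

    newtype DList a = UnsafeDList { unsafeApplyDList :: [a] -> [a] }

    fromList :: [a] -> DList a
    fromList = UnsafeDList . (++)

    toList :: DList a -> [a]
    toList = ($ []) . unsafeApplyDList

    singleton :: a -> DList a
    singleton = UnsafeDList . (:)

    append :: DList a -> DList a -> DList a
    append xs ys = UnsafeDList (unsafeApplyDList xs . unsafeApplyDList ys)

    snoc :: DList a -> a -> DList a
    snoc xs x = UnsafeDList (unsafeApplyDList xs . (x :))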

    To see the difference between flattenSlow and flattenFast, consider some rough evaluations of the functions applied to a Tree:

    flattenSlow (Branch (Branch (Leaf 'a') (Leaf 'b')) (Leaf 'c'))
      = go (Branch (Branch (Leaf 'a') (Leaf 'b')) (Leaf 'c'))
      = go (Branch (Leaf 'a') (Leaf 'b')) ++ go (Leaf 'c')
      = (go (Leaf 'a') ++ go (Leaf 'b')) ++ "c"
      = ("a" ++ "b") ++ "c"
      = ('a' : [] ++ "b") ++ "c"
      = ('a' : "b") ++ "c"
      = 'a' : "b" ++ "c"
      = 'a' : 'b' : [] ++ "c"
      = 'a' : 'b' : "c"
    flattenFast (Branch (Branch (Leaf 'a') (Leaf 'b')) (Leaf 'c'))
      = toList $ go (Branch (Branch (Leaf 'a') (Leaf 'b')) (Leaf 'c'))
      = toList $ go (Branch (Leaf 'a') (Leaf 'b')) `append` go (Leaf 'c')
      = unsafeApplyDList (go (Branch (Leaf 'a') (Leaf 'b'))) . unsafeApplyDList (go (Leaf 'c')) $ []
      = unsafeApplyDList (go (Branch (Leaf 'a') (Leaf 'b'))) (unsafeApplyDList (go (Leaf 'c')) [])
      = unsafeApplyDList (go (Branch (Leaf 'a') (Leaf 'b'))) (unsafeApplyDList (singleton 'c') [])
      = unsafeApplyDList (go (Branch (Leaf 'a') (Leaf 'b'))) (unsafeApplyDList (UnsafeDList ((:) 'c')) [])
      = unsafeApplyDList (go (Branch (Leaf 'a') (Leaf 'b'))) "c"
      = unsafeApplyDList (UnsafeDList (unsafeApplyDList (go (Leaf 'a')) . unsafeApplyDList (go (Leaf 'b')))) "c"
      = unsafeApplyDList (go (Leaf 'a')) (unsafeApplyDList (go (Leaf 'b')) "c")
      = unsafeApplyDList (go (Leaf 'a')) (unsafeApplyDList (singleton 'b') "c")
      = unsafeApplyDList (go (Leaf 'a')) (unsafeApplyDList (UnsafeDList ((:) 'b')) "c")
      = unsafeApplyDList (go (Leaf 'a')) ('b' : "c")
      = unsafeApplyDList (singleton 'a') ('b' : "c")
      = unsafeApplyDList (UnsafeDList ((:) 'a')) ('b' : "c")
      = 'a' : 'b' : "c"

    The left-nested ++ in flattenSlow results in intermediate list constructions that are immediately discarded in the evaluation of the outermost ++. On the other hand, the evaluation of flattenFast involves no intermediate list construction but rather function applications and newtype constructor wrapping and unwrapping. This is where the efficiency comes from.

    Warning! Note that there is truth in the above, but there is also a lot of hand-waving and intrinsic complexity. For example, there may be GHC rewrite rules that apply to ++, which will change the actual evaluation. And, of course, strictness, laziness, and sharing all play a significant role. Also, not every function in the dlist package is the most efficient for every situation.

    Moral of the story: If you are using dlist to speed up your code, check to be sure that it actually does. Benchmark!

    Design Notes

    These are some notes on design and development choices made for the dlist package.

    Avoid ++

    The original intent of Hughes’ representation of lists as first-class functions was to provide an abstraction such that the list append operation found in functional programming languages (and now called ++ in Haskell) would not appear in left-nested positions, to avoid duplicated structure as lists are constructed. The lesson learned by many people using lists over the years is that the append operation can appear, sometimes surprisingly, in places they don’t expect it.

    One of our goals is for the dlist package to avoid surprising its users with unexpected insertions of ++. Towards this end, there should be a minimal set of functions in dlist in which ++ can be directly or indirectly found. The list of known uses of ++ includes:

    • DList: fromList, fromString, read
    • DNonEmpty: fromList, fromNonEmpty, fromString, read

    If any future requested functions involve ++ (e.g. via fromList), the burden of inclusion is higher than it would be otherwise.

    Abstraction

    The DList representation and its supporting functions (e.g. append, snoc, etc.) rely on an invariant to preserve their safe use. That is, without this invariant, a user may encounter unexpected outcomes.

    (We use safety in the sense that the semantics are well-defined and expected, not in the sense of side effects or referential transparency. The invariant does not directly lead to side effects in the dlist package, but a program that uses an unsafely generated DList may do something surprising.)

    The invariant is that, for any xs :: DList a:

    fromList (toList xs) = xs

    To see how this invariant can be broken, consider this example:

    xs :: DList a
    xs = UnsafeDList (const [])
    
    fromList (toList (xs `snoc` 1))
      = fromList (toList (UnsafeDList (const []) `snoc` 1))
      = fromList (toList (UnsafeDList (unsafeApplyDList (UnsafeDList (const [])) . (1 :))))
      = fromList (toList (UnsafeDList (const [] . (1 :))))
      = fromList (($ []) . unsafeApplyDList $ UnsafeDList (const [] . (1 :)))
      = fromList (const [] . (1 :) $ [])
      = fromList (const [] [1])
      = fromList []
      = UnsafeDList ([] ++)

    The invariant can also be stated as:

    toList (fromList (toList xs)) = toList xs

    And we can restate the example as:

    toList (fromList (toList (xs `snoc` 1)))
      = toList (UnsafeDList ([] ++))
      = []

    It would be rather unhelpful and surprising to find that (xs `snoc` 1) turns out to be the empty list.

    To preserve the invariant on DList, we provide it as an abstract type in the Data.DList module. The constructor, UnsafeDList, and record label, unsafeApplyDList, are not exported because these can be used, as shown above, to break the invariant.

    All of that said, there have been numerous requests to export the DList constructor. We are not convinced that it is necessary, but we are convinced that users should decide for themselves.

    To use the constructor and record label of DList, you import them as follows:

    import Data.DList.Unsafe (DList(UnsafeDList, unsafeApplyDList))

    If you are using Safe Haskell, you may need to add this at the top of your module:

    {-# LANGUAGE Trustworthy #-}

    Just be aware that the burden of proof for safety is on you.

    References

    These are various references where you can learn more about difference lists.

    Research

    • A novel representation of lists and its application to the function “reverse.” John Hughes. Information Processing Letters. Volume 22, Issue 3. 1986-03. Pages 141-144. PDF

      This is the original published source for a representation of lists as first-class functions.

    Background

    Blogs and Mailing Lists

    Books

    License

    BSD 3-Clause “New” or “Revised” License © Don Stewart, Sean Leather, contributors

    Visit original content creator repository
  • AndroidMVVMExample

    Android MVVM with Single Activity sample app that uses kotlin coroutines flow.

    This is a sample app that uses Kotlin coroutines Flow and StateFlow.

    This app uses the agify REST service to get the age by name and country.

    It follows an MVVM, single-activity architecture using Navigation component best practices.

    Libraries Used

    Screenshot

    Screenshots: Main, Select Lang, Searched Data Result, Searched History, Edit Img, Change Whole Img (GIF)

    Where To Go From Here

    • Marvel Api Android Components Architecture in a Modular Word is a sample project that presents a modern, 2020 approach to Android application development using Kotlin and the latest tech stack.
    • A UI/Material Design sample. The interface of the app is deliberately kept simple to focus on architecture. Check out Plaid instead.
    • A complete Jetpack sample covering all libraries. Check out Android Sunflower or the advanced Github Browser Sample instead.
    • A real production app with network access, user authentication, etc. Check out the Google I/O app, Santa Tracker or Tivi for that.
    • Model-View-ViewModel (i.e. MVVM) is a template of a client application architecture.
    • MarvelHeroes is a demo application based on modern Android tech stacks and MVVM architecture, fetching data from the network and integrating persisted data in the database via the repository pattern.
    • Clean Architecture – This is a sample app that is part of a blog post I have written about how to architect an Android application using Uncle Bob’s Clean Architecture approach.
    • Idiomatic Kotlin – Contains all the code presented in the Idiomatic Kotlin tutorial series.

    UseCase

    You can reference the good use cases of this approach in the repositories below.

    • Pokedex – 🗡️ Android Pokedex using Hilt, Motion, Coroutines, Flow, Jetpack (Room, ViewModel, LiveData) based on MVVM architecture.
    • DisneyMotions – 🦁 A Disney app using transformation motions based on MVVM (ViewModel, Coroutines, LiveData, Room, Repository, Koin) architecture.
    • MarvelHeroes – ❤️ A sample Marvel heroes application based on MVVM (ViewModel, Coroutines, LiveData, Room, Repository, Koin) architecture.
    • TheMovies2 – 🎬 A demo project using The Movie DB based on Kotlin MVVM architecture and material design & animations.
    • ForUiRef – A curated list of awesome Android UI/UX libraries.
    • AndroidUtilsSample – An Android utils app containing simple code for starting an app.
    • Android Sample Animation – A simple animation sample project.
    • Backend Apis List

    • List Of Open APIs – This repo is a collection of awesome APIs for developers. Feel free to star and fork. Any comments or suggestions? Let us know; we love PRs :). Please follow the awesome list.
    Visit original content creator repository
  • CodableWrapper

    Requirements

    Xcode       Minimum Deployments    Version
    Xcode15+    iOS13+ / macOS11+      1.0+
    Xcode15-    iOS13- / macOS11-      0.3.3

    About

    The project's objective is to improve the experience of using the Codable protocol by taking advantage of the macros introduced in Swift 5.9, and to address the shortcomings of the official Codable implementation across versions.

    Feature

    • Default value
    • Automatic conversion between basic types (String, Bool, Number, etc.)
    • Custom multiple CodingKey
    • Nested Dictionary CodingKey
    • Automatic compatibility between camel case and snake case
    • Convenience Codable subclass
    • Transformer

    Installation

    CocoaPods

    pod 'CodableWrapper', :git => 'https://github.com/winddpan/CodableWrapper.git'

    Swift Package Manager

    https://github.com/winddpan/CodableWrapper
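
    For example, a minimal Package.swift pulling in the package might look like the sketch below. The package name, product name, and version here are assumptions based on the repository URL and the table in Requirements; adjust them to the release that matches your Xcode.

    // swift-tools-version:5.9
    import PackageDescription

    let package = Package(
        name: "MyApp",
        platforms: [.iOS(.v13), .macOS(.v11)],
        dependencies: [
            // version is an assumption based on the Requirements table above
            .package(url: "https://github.com/winddpan/CodableWrapper", from: "1.0.0")
        ],
        targets: [
            .target(name: "MyApp", dependencies: ["CodableWrapper"])
        ]
    )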

    Example

    @Codable
    struct BasicModel {
        var defaultVal: String = "hello world"
        var defaultVal2: String = Bool.random() ? "hello world" : ""
        let strict: String
        let noStrict: String?
        let autoConvert: Int?
    
        @CodingKey("hello")
        var hi: String = "there"
    
        @CodingNestedKey("nested.hi")
        @CodingTransformer(StringPrefixTransform("HELLO -> "))
        var codingKeySupport: String
    
        @CodingNestedKey("nested.b")
        var nestedB: String
    
        var testGetter: String {
            nestedB
        }
    }
    
    final class CodableWrapperTests: XCTestCase {
        func testBasicUsage() throws {
            let jsonStr = """
            {
                "strict": "value of strict",
                "autoConvert": "998",
                "nested": {
                    "hi": "nested there",
                    "b": "b value"
                }
            }
            """
    
            let model = try JSONDecoder().decode(BasicModel.self, from: jsonStr.data(using: .utf8)!)
            XCTAssertEqual(model.defaultVal, "hello world")
            XCTAssertEqual(model.strict, "value of strict")
            XCTAssertEqual(model.noStrict, nil)
            XCTAssertEqual(model.autoConvert, 998)
            XCTAssertEqual(model.hi, "there")
            XCTAssertEqual(model.codingKeySupport, "HELLO -> nested there")
            XCTAssertEqual(model.nestedB, "b value")
    
            let encoded = try JSONEncoder().encode(model)
            let dict = try JSONSerialization.jsonObject(with: encoded) as! [String: Any]
            XCTAssertEqual(model.defaultVal, dict["defaultVal"] as! String)
            XCTAssertEqual(model.strict, dict["strict"] as! String)
            XCTAssertNil(dict["noStrict"])
            XCTAssertEqual(model.autoConvert, dict["autoConvert"] as? Int)
            XCTAssertEqual(model.hi, dict["hello"] as! String)
            XCTAssertEqual("nested there", (dict["nested"] as! [String: Any])["hi"] as! String)
            XCTAssertEqual(model.nestedB, (dict["nested"] as! [String: Any])["b"] as! String)
        }
    }

    Macro usage

    @Codable

    • Automatically conforms to the Codable protocol if conformance is not explicitly declared

      // both below works well
      
      @Codable
      struct BasicModel {}
      
      @Codable
      struct BasicModel: Codable {}
    • Default value

      @Codable
      struct TestModel {
          let name: String
          var balance: Double = 0
      }
      
      // { "name": "jhon" }
    • Automatic conversion between basic types (String, Bool, Number, etc.)

      @Codable
      struct TestModel {
          let autoConvert: Int?
      }
      
      // { "autoConvert": "998" }
    • Automatic compatibility between camel case and snake case

      @Codable
      struct TestModel {
          var userName: String = ""
      }
      
      // { "user_name": "jhon" }
    • Member Wise Init

      @Codable
      public struct TestModel {
          public var userName: String = ""
      
          // Automatically generated
          public init(userName: String = "") {
              self.userName = userName
          }
      }

      @Codable(wiseInit: false)
      public struct TestModel {
          public var userName: String = ""
      
          // Memberwise init is not generated when wiseInit is false
      }

    @CodingKey

    • Custom CodingKeys

      @Codable
      struct TestModel {
          @CodingKey("u1", "u2", "u9")
          var userName: String = ""
      }
      
      // { "u9": "jhon" }

    @CodingKeyIgnored

    • Ignore a property during encode and decode

      struct NonCodable {}
      
      @Codable
      struct TestModel {
          @CodingKeyIgnored
          var nonCodable: NonCodable?
      }

    @CodingNestedKey

    • Custom CodingKeys in nested dictionary

      @Codable
      struct TestModel {
          @CodingNestedKey("data.u1", "data.u2", "data.u9")
          var userName: String = ""
      }
      
      // { "data": {"u9": "jhon"} }

    @CodableSubclass

    • Automatically generates a Codable subclass's init(from:) and encode(to:) with the required super calls

      @Codable
      class BaseModel {
          let userName: String
      }
      
      @CodableSubclass
      class SubModel: BaseModel {
          let age: Int
      }
      
      // {"user_name": "jhon", "age": 22}

    @CodingTransformer

    • Transform between Codable and non-Codable model types

      struct DateWrapper {
          let timestamp: TimeInterval
      
          var date: Date {
              Date(timeIntervalSince1970: timestamp)
          }
      
          init(timestamp: TimeInterval) {
              self.timestamp = timestamp
          }
      
          static var transformer = TransformOf<DateWrapper, TimeInterval>(fromJSON: { DateWrapper(timestamp: $0 ?? 0) }, toJSON: { $0.timestamp })
      }
      
      @Codable
      struct DateModel {
          @CodingTransformer(DateWrapper.transformer)
          var time: DateWrapper? = DateWrapper(timestamp: 0)
          
          @CodingTransformer(DateWrapper.transformer)
          var time1: DateWrapper = DateWrapper(timestamp: 0)
          
          @CodingTransformer(DateWrapper.transformer)
          var time2: DateWrapper?
      }
      
      class TransformTest: XCTestCase {
          func testDateModel() throws {
              let json = """
              {"time": 12345}
              """
      
              let model = try JSONDecoder().decode(DateModel.self, from: json.data(using: .utf8)!)
              XCTAssertEqual(model.time?.timestamp, 12345)
              XCTAssertEqual(model.time?.date.description, "1970-01-01 03:25:45 +0000")
      
              let encode = try JSONEncoder().encode(model)
              let jsonObject = try JSONSerialization.jsonObject(with: encode, options: []) as! [String: Any]
              XCTAssertEqual(jsonObject["time"] as! TimeInterval, 12345)
          }
      }

    Star History

    Star History Chart

    Visit original content creator repository

  • ux_tools

    UX – User Experience

    User Experience – UX occurs when the user comes into contact with a product.

    • How to listen to the user and extract needs?
    • How to create products that people need?

    Differences between UI and UX

    UI – User Interface:
    • What the user finds and sees when they arrive at a website or app
    • Everything that allows you to interact with the website or app, from buttons to forms
    • Ensures that the website or app looks good and works well on all platforms: mobile, web, tablets

    UX – User Experience:
    • Focuses on what the user perceives of the website or app
    • Focuses on whether the content was useful, whether the navigation was enriching, and whether it was easily manageable and intuitive
    • Includes target research, psychology, design, and marketing

    UX Tools

    UX tools are essential because you need a wide range of options to help you create the things you need.

    UX Tools and Methods

    ux_tools_by_mafda

    1. Product strategy

    How to discover and create what people need.

    2. Ideas Generation

    How to externalize and communicate what you have in mind.

    3. Planning and Development

    How to execute good ideas.

    4. Validation and Research

    How to evaluate the solution of the problems and improve the product.

    5. Interface Design

    How to transform ideas into sketch, prototypes, and products. Usability and utility.

    6. Success Metrics

    Objectively evaluate the results of the product.

    • KPI (Key Performance Indicator)
    • CTR (Click Through Rate)
    • NPS (Net Promoter Score)
    • DAU (Daily Active Users)
    • Churn Rate
    • LTV (Lifetime Value)
    • HEART (Happiness, Engagement, Adoption, Retention, Task Success)

    7. Launch MVP

    How to launch the MVP. Learn fast and succeed.

    More interesting tools

    (strategy)

    • UXpressia Visualize customer experience and collaborate with your team.
    • FlowMapp Full stack UX platform.
    • Strategyzer Creators of the Business Model Canvas.

    (ideas)

    • MoodBoard Build beautiful, simple, free moodboards.
    • Miro Be Creative. Be Productive. From Anywhere.
    • overflow Create interactive user flow diagrams that tell a story.

    (planning)

    • Trello Keep track of everything.
    • Asana Manage your team’s work, projects, & tasks online.
    • Craft Build intuitive Roadmaps, prioritize features, connect them with your dev teams.

    (validation)

    • UXArmy Online usability testing platform.
    • UserTesting Leader in user research and software testing.
    • Unbounce Design beautiful landing pages.
    • Klickpages Tool to create landing pages.
    • OptimalSort Discover how people categorize information.
    • Optimizely Best-known tool for A/B testing.
    • SurveyMonkey Send and evaluate surveys quickly and easily.

    (metrics)

    • UserZoom Brands can test and measure UX on websites, apps, and prototypes.
    • Delighted Measure and evaluate qualitative metrics.
    • Hotjar Website Heatmaps & Behavior Analytics Tools.
    • Google Analytics Measure and track your sites and applications.

    (design and launch)

    • Axure Powerful Prototyping and Developer.
    • Marvel Rapid prototyping, testing and handoff.
    • inVision Create rich interactive prototypes.
    • Framer Best prototyping tool for teams.
    • Flinto Create interactive and animated prototypes.
    • Principle Design animated and interactive user interfaces.
    • JustInMind From wireframes to highly interactive prototypes.

    More interesting links

    UX checklist

    Other repositories

    Reference


    made with 💙 by mafda

    Visit original content creator repository

  • Pewlett_Hackard_Analysis

    Pewlett_Hackard_Analysis

    Project Overview

    I’ve been working with Bobby from Pewlett Hackard to analyze employee data, breaking down employee names, departments, titles, and tenure. Bobby’s manager has given both of us two more assignments: determine the number of retiring employees per title, and identify employees who are eligible to participate in a mentorship program. I have provided a written report that summarizes the analysis to help prepare Bobby’s manager for the “silver tsunami” as many current employees reach retirement age.

    Resources

    • Data Source: departments.csv, employees.csv, dept_emp.csv, dept_manager.csv, salaries.csv, titles.csv
    • Software: PostgreSQL, pgAdmin, Python, Visual Studio Code, Git Bash

    Analysis of Data

    For this analysis, I used database keys to establish the relationships between multiple tables, focusing on the primary and foreign keys. I used an ERD (Entity Relationship Diagram) to help highlight the relationships between the tables. Once the tables were joined, I was able to create the tables needed to show the retiring employees by title and to identify the employees eligible to participate in the mentorship program.
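
    As an illustration, the retirement-by-title table can be produced with a query along these lines. This is only a sketch: it assumes the table and column names that follow from the provided CSVs (employees with emp_no, first_name, last_name, birth_date; titles with title, from_date, to_date) and a hypothetical birth-date range for retirement eligibility.

    -- Sketch: retiring employees (by birth date) joined with their titles
    SELECT e.emp_no,
           e.first_name,
           e.last_name,
           t.title,
           t.from_date,
           t.to_date
    INTO retirement_titles
    FROM employees AS e
    JOIN titles AS t
      ON e.emp_no = t.emp_no
    WHERE e.birth_date BETWEEN '1952-01-01' AND '1955-12-31'
    ORDER BY e.emp_no;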

    Results of Data

    Deliverable 1: The Number of Retiring Employees by Title

    Retirement_Titles

    These results show 133,776 potentially retiring employees along with their titles. The breakdown by title is:
    • 32,452 Staff
    • 29,415 Senior Engineer
    • 14,221 Engineer
    • 8,047 Senior Staff
    • 4,502 Technique Leader
    • 1,761 Assistant Engineer

    Deliverable 2: The Employees Eligible for the Mentorship Program

    Mentorship_Eligibilty

    There were a total of 1,940 employees eligible for the mentorship program.

    Summary

    After analyzing the data for Bobby to present to his managers, I believe this information will help with forecasting how many retirees to prepare for. They will also be able to predict and plan a hiring campaign from this data to fill the turnover from retirements.

    Visit original content creator repository
  • wth

    WTH – What the Hash?

    WTH Web Interface

    PLEASE NOTE: This is beta software and may not work as intended. Please file issues if you find something broken!

    • This is a new release of software. It will have bugs.
    • I mostly mine ETH, so mining other coins may or may not cause problems. Report any you find, please.

    What is “What the Hash?”?

    WTH is a “consolidator” that gathers data from different APIs, such as those of miners and pools, to bring it all together into one interface (console, web, and/or API).

    See: More Feature Screenshots of Modules

    WTH was designed with the goal of providing an expandable, quick health status / earnings viewer for cryptocurrency related interests, miners, etc.

    It was originally developed to allow me to get a fast view on the health of all my GPU/CPU miners, regardless of miner software or pool software. Mostly, I was frustrated at looking at half a dozen or more web pages just to check in on miners, pools, and portfolios.

    It isn’t meant to compete with fancy web UIs with charts and graphs (yet), but it can easily run alongside those. I have found that I rely less and less on the remote pool web interfaces for updates, and features like the portfolio don’t require me to share my holdings with external websites.

    With very little interaction, you should be able to see the basics of your cryptocurrency world. Adding more mining pools, staking & liquidity pools, crypto portfolios, and more is the plan.

    WTH also offers an API for other systems to use the collected data. The primary goal of this is so we can offer a more advanced Web UI in the future, but it also tries to serve as a single API protocol for many different miners and pools out in the wild.

    You can help us and add your own modules as well! The coding required can be fairly simple, depending on the remote API, and help from us can get your module added quickly. Don’t program? You can request a new module, but those who donate get the most attention (see donation addresses below). Requests can go here: Ideas

    WTH should be considered beta software. I wrote it as a quick tool for myself, then it proved so helpful, I started to grow it, and then I decided to release it. Contributions to the code base are welcome, but only do so if you understand that this software is beta and things will change.

    Join the new discord here

    By default, WTH offers the following modes:

    Installation – Windows

    • Run the following to install automatically: #> .\install_win.ps1
    • Copy wth_config_example.yml to wth_config.yml
    • Edit config file (see Configuration)
    • #> .\wth.rb (or double click in file window)
      • If .\wth.rb doesn’t work, try ‘ruby .\wth.rb’

    Manual installation can be done as:

    Installation – Ubuntu 20.04/21.10

    • Coming soon!
      • Install script for linux
      • Install for ARM
    • Install Ruby dependencies
      • sudo apt install curl g++ gcc autoconf automake bison libc6-dev libffi-dev libgdbm-dev libncurses5-dev libsqlite3-dev libtool libyaml-dev make pkg-config sqlite3 zlib1g-dev libgmp-dev libreadline-dev libssl-dev
    • Make sure your system is up-to-date (we’ll be doing this a lot)
      • $ sudo apt-get update -y && sudo apt-get upgrade -y
    • Install ruby 2.7+ for Ubuntu
      • $ sudo apt install ruby-full
    • Update your system
      • $ sudo apt-get update -y && sudo apt-get upgrade -y
    • Confirm Ruby 2.7+:
      • $ ruby --version
    • Download tar.gz from releases https://github.com/roboyeti/wth/releases/
      • create a folder in /home called ‘wth’
      • copy release tar.gz to /home/wth
      • cd /home/wth
      • extract release tar.gz to /home/wth with your favorite app or
    • Extract from Terminal:
      • $ tar -xf release.tar.gz (don’t forget to replace release.tar.gz with actual filename)
    • cd to /home/wth
    • Install Ruby Bundler
      • $ sudo apt install ruby-bundler
    • Update your system
      • $ sudo apt-get update -y && sudo apt-get upgrade -y
    • $ bundle install
    • Copy wth_config_example.yml to wth_config.yml
    • Edit config file (see Configuration)
    • $ ./wth.rb
      • If ./wth.rb doesn’t work, try ‘ruby ./wth.rb’

    Installation – OSx

    • Presumably the same as Linux; mileage unknown

    Use

    • All modules can be spread across 10 “pages” for display and keys 1-0 bring you to the page.
    • “e” key will show available commands in basic web interface and console
    • http://localhost:8080/api?module=list provides a list of configured modules that can be queried.
    • With either web interface (basic or API), you can enable a private key to restrict access
      • Enable in config and set your key
      • Add &key=<your_key> to the URL for both interfaces to send it with the request.
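
    For example, with the default host and port and the API enabled, a request from the command line might look like this (the key value is a placeholder):

    curl "http://localhost:8080/api?module=list&key=<your_key>"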

    Other stuff

    • wthlab.rb is an interactive shell with a WTH application spun up with your config.
    • wthd.rb is an untested daemonized wth for OSs that support fork.
      • Use: ruby ./wthd.rb [start|stop|status|restart]
    • To detach from console, you can also set config option “console_out” to false
    • When a URL is visible on the console, you may be able to CTRL + Mouse click it to open in browser. Terminal and OS mileage may vary.

    Configuration

    • The default config file is “wth_config.yml”
    • Example config file is “wth_config_example.yml”
    • You can run with a different config file using arguments to wth: -c or --config
      • Example: ruby wth.rb -c wth_my_other_config.yml

    Configuration – Modules

    • Modules are interfaces to software installed on your mining machines or remote APIs. You may have to install software yourself on one or more machines to get the features of a module.
    • Specific configuration options can be found in the example config.
    • Brief documentation for how to enable APIs for a specific module target can be found in docs/modules/<target_name>.

    List of supported modules and the config “api” entry for them:

    GPU Miners

    • Excavator (Nicehash Nvidia Miner) = “nice_hash”
    • Claymore Miner = “claymore” (untested)
    • Phoenix Miner = “phoenix”
    • T-Rex Miner = “t_rex_unm”
    • GMiner = “g_miner”
    • LolMiner = “lol_miner”
    • NanoMiner = “nano_miner”
    • NBMiner = “nbminer”

    CPU Miners

    • XMRig = “xmrig”
    • Cpuminer-gr = “raptoreum”
    • Cpuminer- = “cpuminer” (Untested other than cpuminer-gr)

    Harddrive Miners

    • Signum Miner (via pool API) = “signum_pool_miner”

    Pools

    • 2Miners = “2miners_pool”
    • Nano Pool = “nano_pool”
    • Signum Pool API = “signum_pool_view”
    • Flock Pool (RTM) = “flock_pool”
    • Unmineable (Address API) = “unmineable” (best with a local Tor installation for SOCKS proxying)

    Tokens

    • Signum Tokens = “signum_tokens”
    • ZapperFi = “zapper_fi” – Includes ETH tokens, Avalanche, and more. See http://zapper.fi

    Portfolio

    • Coingecko = “coin_gecko” – Build your own personal portfolio without sharing your data. Pricing is possible on any coin CoinGecko supports. See http://coingecko.com

    Hardware

    • LibreHardwareMonitor + WMI GPU/CPU monitoring on Win32 = “ohm_gpu_w32” (Experimental). Compatibility with OpenHardwareMonitor may be possible, but is untested.
    • Nvidia SMI Remote (https://github.com/lampaa/nvidia-smi-rest) = “smi_rest”

    Misc Modules

    • WTH can pull data from another WTH instance = “wth_link”
    • Banner = “banner” – Inserts a banner of text by position in config and page#

    Configuration – Plugins

    • Specific configuration options can be found in the example config.

    List of supported plugins

    • what_to_mine : Enables what to mine revenue calculations on supporting modules
    • coin_gecko : Enables value calculations for modules to convert to USD (more currencies supported soon)

    Configuration – Global

    console_out: [true|false] = Enable console output.

    web_server_start: [true|false] = Run web server or not

    default_module_frequency: [integer] = Number of seconds between default module check. Override per module with “every:” directive. Some modules have minimums enforced to ensure you don’t get yourself banned or overload remote APIs that are generously provided by others for free.

    Configuration – Web Server

    The following web server config options are:

    web_server:

    html_out: [true|false] = Enable the console => html conversion. Turning this off will leave the API running, if that is enabled. True default.

    port: [integer] = Port number to run basic and API on. Default is 8080

    host: [network_addr] = For local machine access, set to 127.0.0.1 or localhost, 0.0.0.0 for all interfaces (default), or specific IP address for a specific interface.

    ssl: [true|false] = Enables SSL. Your SSL cert and pkey pem files will be generated for you and stored in “data/ssl/*.pem”. You can replace those with your own if you desire.

    api: [true|false] = Enable the API interface for the web server. Default false.

    key: [string] = User-chosen string to act as your private web access string. Append all URL requests with &api_key=<your_key> if you set this.
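
    Pulling the global and web server options above together, a minimal wth_config.yml might look roughly like the sketch below. The option names are taken from the descriptions in this section; treat wth_config_example.yml as the authoritative reference for the real structure and defaults.

    # Sketch only - see wth_config_example.yml for the full, authoritative layout
    console_out: true
    web_server_start: true
    default_module_frequency: 60
    web_server:
      html_out: true
      port: 8080
      host: 0.0.0.0
      ssl: false
      api: true
      key: "my_private_key"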

    Configuration – Misc Notes

    • Tor SOCKS and HTTP proxying are available, but they are currently enabled per module, with no global mechanism to set them yet, and not all modules support them (those that use custom network code: claymore, phoenix, cpuminer, zapper.fi).

    Donate!

    Donations are very welcome if you find this program helpful. If you want a miner, pool, or other cryptocurrency-related site/tool integrated, donations also go a long way toward convincing me to investigate whether it is possible and to spend the personal time adding something I don’t need myself.

    • BTC: bc1qwnuxek3zw6cht7gqm07smr7pam8qngl9l72jqk
    • ETH: 0x0c3154E8bFB49Fc54e675f4D230737B76cAc8346
    • ETC: 0x0b719bd9AD3786D340ea0D13465CB7EDe20c7DF5
    • SIGNA: S-CJFF-3JYH-GMBY-D2DRX
    • RTM: RCMPMeSS2CYSbepTEbR5X3dNpwDQFZxnHM
    • XMR: 85AUKf2jByxRy884ebLagvToXmTW4hYmrhQUxvudKsvwWKpdKt1xMatargMD4DQTCCZgoxtiyrz6RTUXeciKGdz8Vqd9Ly8

    Example Web View

    Example Console View

    Visit original content creator repository
  • pic18f56q71-cnano-adccc-context-switching-mplab-mcc

    Analog-to-Digital Converter with Computation (ADCC) and Context Switching — Context Switching Using PIC18F56Q71 Microcontroller with MCC Melody

    This code example demonstrates how to perform ADCC conversions on two input channels that have different peripheral configurations by using the Context Switching feature. The ADCC with Context Switching supports up to four configuration contexts and offers the option of switching between these contexts at runtime. Using this feature, a single ADCC peripheral captures data from multiple analog input channels, each with its own configuration. The conversion results are processed and displayed in a terminal application using serial communication via a UART peripheral.

    Related Documentation

    More details and code examples on the PIC18F56Q71 can be found at the following links:

    Software Used

    Hardware Used

    • The PIC18F56Q71 Curiosity Nano Development board is used as a test platform:

    • Curiosity Nano Adapter:

    • POT 3 CLICK Board:


    Operation

    To program the Curiosity Nano board with this MPLAB® X project, follow the steps provided in the How to Program the Curiosity Nano Board chapter.

    Setup

    The following configurations must be made for this project:

    • The system clock is configured at 64 MHz – ADCRC Oscillator enabled
    • ADCC with Context Switching:
      • Input Configuration: Single-Ended mode
      • Result Format: Right justified
      • VDD: 3.3 V
      • Clock Selection: ADCRC
      • Enable Context 1:
        • Positive Channel Selection: ANA1
        • Positive Voltage Reference: VDD
        • Operating Mode Selection: Average mode
        • Error Calculation Mode: First derivative of single measurement
      • Enable Context 2:
        • Positive Channel Selection: ANA2
        • Positive Voltage Reference: VDD
        • Operating Mode Selection: Basic mode
        • Error Calculation Mode: First derivative of single measurement
    • UART2:
      • 115200 baud rate
      • 8 data bits
      • No parity bit
      • 1 Stop bit
    • UART2PLIB:
      • Redirect STDIO to UART: enabled
      • Enable Receive: enabled
      • Enable Transmit: enabled
    Pin Configuration:
    • RA1 – Analog input
    • RA2 – Analog input
    • RB4 – Digital output
    • RB5 – Digital input

    Back to Top


    Demo

    In this example, the ADC reads data from the two potentiometers using context switching and displays the result on the serial terminal.



    Back to Top

    Summary

    This example shows how to configure the ADCC with Context Switching using the MPLAB® Code Configurator. Also, it demonstrates the use of context switching for acquiring data from multiple analog inputs.

    Back to Top

    How to Program the Curiosity Nano Board

    This chapter demonstrates how to use the MPLAB® X IDE to program a PIC® device with an example project (Example_Project.X). The same steps can be applied to any other project.

    1. Connect the board to the PC.

    2. Open the Example_Project.X project in MPLAB® X IDE.

    3. Set the Example_Project.X project as main project.
      Right click the project in the Projects tab and click Set as Main Project.

    4. Clean and build the Example_Project.X project.
      Right click the Example_Project.X project and select Clean and Build.

    5. Select PICxxxxx Curiosity Nano in the Connected Hardware Tool section of the project settings:
      Right click the project and click Properties.
      Click the arrow under the Connected Hardware Tool.
      Select PICxxxxx Curiosity Nano (click the SN), click Apply and then click OK.

    6. Program the project to the board.
      Right click the project and click Make and Program Device.


    Visit original content creator repository
  • argocd-trivy-extension

    argocd-trivy-extension

    Argo CD UI extension that displays vulnerability report data from Trivy, an open source security scanner.

    Trivy creates a vulnerability report Kubernetes resource with the results of a security scan. The UI extension then parses the report data and displays it as a grid and dashboard viewable in Pod resources within the Argo CD UI.

    vulnerabilities dashboard

    Prerequisites

    Install UI extension

    The UI extension needs to be installed by mounting the React component in the Argo CD API server. This process can be automated by using the argocd-extension-installer. This installation method runs an init container that downloads, extracts, and places the file in the correct location.

    Helm

    To install the UI extension with the Argo CD Helm chart add the following to the values file:

    server:
      extensions:
        enabled: true
        extensionList:
          - name: extension-trivy
            env:
              # URLs used in example are for the latest release, replace with the desired version if needed
              - name: EXTENSION_URL
                value: https://github.com/mziyabo/argocd-trivy-extension/releases/latest/download/extension-trivy.tar
              - name: EXTENSION_CHECKSUM_URL
                value: https://github.com/mziyabo/argocd-trivy-extension/releases/latest/download/extension-trivy_checksums.txt

    Kustomize

    Alternatively, the yaml file below can be used as an example of how to define a kustomize patch to install this UI extension:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: argocd-server
    spec:
      template:
        spec:
          initContainers:
            - name: extension-trivy
              image: quay.io/argoprojlabs/argocd-extension-installer:v0.0.1
              env:
              # URLs used in example are for the latest release, replace with the desired version if needed
              - name: EXTENSION_URL
                value: https://github.com/mziyabo/argocd-trivy-extension/releases/latest/download/extension-trivy.tar
              - name: EXTENSION_CHECKSUM_URL
                value: https://github.com/mziyabo/argocd-trivy-extension/releases/latest/download/extension-trivy_checksums.txt
              volumeMounts:
                - name: extensions
                  mountPath: /tmp/extensions/
              securityContext:
                runAsUser: 1000
                allowPrivilegeEscalation: false
          containers:
            - name: argocd-server
              volumeMounts:
                - name: extensions
                  mountPath: /tmp/extensions/
          volumes:
            - name: extensions
              emptyDir: {}

    Release Notes

    WIP, contributions welcome

    License

    Apache-2.0

    Visit original content creator repository
  • iTransformer

    iTransformer

    Implementation of iTransformer – SOTA Time Series Forecasting using Attention networks, out of Tsinghua / Ant group

    All that remains is tabular data (xgboost still champion here) before one can truly declare “Attention is all you need”

    In before Apple gets the authors to change the name.

    The official implementation has been released here!

    Appreciation

    • StabilityAI and 🤗 Huggingface for the generous sponsorship, as well as my other sponsors, for affording me the independence to open source current artificial intelligence techniques.

    • Greg DeVos for sharing experiments he ran on iTransformer and some of the improvised variants

    Install

    $ pip install iTransformer

    Usage

    import torch
    from iTransformer import iTransformer
    
    # using solar energy settings
    
    model = iTransformer(
        num_variates = 137,
        lookback_len = 96,                  # or the lookback length in the paper
        dim = 256,                          # model dimensions
        depth = 6,                          # depth
        heads = 8,                          # attention heads
        dim_head = 64,                      # head dimension
        pred_length = (12, 24, 36, 48),     # can be one prediction, or many
        num_tokens_per_variate = 1,         # experimental setting that projects each variate to more than one token. the idea is that the network can learn to divide up into time tokens for more granular attention across time. thanks to flash attention, you should be able to accommodate long sequence lengths just fine
        use_reversible_instance_norm = True # use reversible instance normalization, proposed here https://openreview.net/forum?id=cGDAkQo1C0p . may be redundant given the layernorms within iTransformer (and whatever else attention learns emergently on the first layer, prior to the first layernorm). if i come across some time, i'll gather up all the statistics across variates, project them, and condition the transformer a bit further. that makes more sense
    )
    
    time_series = torch.randn(2, 96, 137)  # (batch, lookback len, variates)
    
    preds = model(time_series)
    
    # preds -> Dict[int, Tensor[batch, pred_length, variate]]
    #       -> (12: (2, 12, 137), 24: (2, 24, 137), 36: (2, 36, 137), 48: (2, 48, 137))
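
    Since the forward pass returns a dictionary keyed by prediction length, a training step can simply sum a loss over the horizons. Continuing the example above, a minimal sketch (not from the official README; the random targets below stand in for real future values):

    import torch.nn.functional as F

    # hypothetical ground-truth futures, one tensor of shape (batch, pred_length, variates) per horizon
    targets = {horizon: torch.randn(2, horizon, 137) for horizon in (12, 24, 36, 48)}

    loss = sum(F.mse_loss(preds[horizon], targets[horizon]) for horizon in targets)
    loss.backward()  # then step your optimizer as usual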

    For an improvised version that does granular attention across time tokens (as well as the original per-variate tokens), just import iTransformer2D and set the additional num_time_tokens.

    Update: It works! Thanks goes out to Greg DeVos for running the experiment here!

    Update 2: Got an email. Yes you are free to write a paper on this, if the architecture holds up for your problem. I have no skin in the game

    import torch
    from iTransformer import iTransformer2D
    
    # using solar energy settings
    
    model = iTransformer2D(
        num_variates = 137,
        num_time_tokens = 16,               # number of time tokens (patch size will be (look back length // num_time_tokens))
        lookback_len = 96,                  # the lookback length in the paper
        dim = 256,                          # model dimensions
        depth = 6,                          # depth
        heads = 8,                          # attention heads
        dim_head = 64,                      # head dimension
        pred_length = (12, 24, 36, 48),     # can be one prediction, or many
        use_reversible_instance_norm = True # use reversible instance normalization
    )
    
    time_series = torch.randn(2, 96, 137)  # (batch, lookback len, variates)
    
    preds = model(time_series)
    
    # preds -> Dict[int, Tensor[batch, pred_length, variate]]
    #       -> (12: (2, 12, 137), 24: (2, 24, 137), 36: (2, 36, 137), 48: (2, 48, 137))

    Experimental

    iTransformer with fourier tokens

    An iTransformer that also uses Fourier tokens (the FFT of the time series is projected into tokens of their own, attended alongside the variate tokens, and spliced out at the end)

    import torch
    from iTransformer import iTransformerFFT
    
    # using solar energy settings
    
    model = iTransformerFFT(
        num_variates = 137,
        lookback_len = 96,                  # or the lookback length in the paper
        dim = 256,                          # model dimensions
        depth = 6,                          # depth
        heads = 8,                          # attention heads
        dim_head = 64,                      # head dimension
        pred_length = (12, 24, 36, 48),     # can be one prediction, or many
        num_tokens_per_variate = 1,         # experimental setting that projects each variate to more than one token. the idea is that the network can learn to divide up into time tokens for more granular attention across time. thanks to flash attention, you should be able to accommodate long sequence lengths just fine
        use_reversible_instance_norm = True # use reversible instance normalization, proposed here https://openreview.net/forum?id=cGDAkQo1C0p . may be redundant given the layernorms within iTransformer (and whatever else attention learns emergently on the first layer, prior to the first layernorm). if i come across some time, i'll gather up all the statistics across variates, project them, and condition the transformer a bit further. that makes more sense
    )
    
    time_series = torch.randn(2, 96, 137)  # (batch, lookback len, variates)
    
    preds = model(time_series)
    
    # preds -> Dict[int, Tensor[batch, pred_length, variate]]
    #       -> (12: (2, 12, 137), 24: (2, 24, 137), 36: (2, 36, 137), 48: (2, 48, 137))

    Todo

    • beef up the transformer with latest findings
    • improvise a 2d version across both variates and time
    • improvise a version that includes fft tokens
    • improvise a variant that uses adaptive normalization conditioned on statistics across all variates

    Citation

    @misc{liu2023itransformer,
      title   = {iTransformer: Inverted Transformers Are Effective for Time Series Forecasting}, 
      author  = {Yong Liu and Tengge Hu and Haoran Zhang and Haixu Wu and Shiyu Wang and Lintao Ma and Mingsheng Long},
      year    = {2023},
      eprint  = {2310.06625},
      archivePrefix = {arXiv},
      primaryClass = {cs.LG}
    }

    @misc{shazeer2020glu,
        title   = {GLU Variants Improve Transformer},
        author  = {Noam Shazeer},
        year    = {2020},
        url     = {https://arxiv.org/abs/2002.05202}
    }

    @misc{burtsev2020memory,
        title   = {Memory Transformer},
        author  = {Mikhail S. Burtsev and Grigory V. Sapunov},
        year    = {2020},
        eprint  = {2006.11527},
        archivePrefix = {arXiv},
        primaryClass = {cs.CL}
    }

    @inproceedings{Darcet2023VisionTN,
        title   = {Vision Transformers Need Registers},
        author  = {Timoth'ee Darcet and Maxime Oquab and Julien Mairal and Piotr Bojanowski},
        year    = {2023},
        url     = {https://api.semanticscholar.org/CorpusID:263134283}
    }

    @inproceedings{dao2022flashattention,
        title   = {Flash{A}ttention: Fast and Memory-Efficient Exact Attention with {IO}-Awareness},
        author  = {Dao, Tri and Fu, Daniel Y. and Ermon, Stefano and Rudra, Atri and R{\'e}, Christopher},
        booktitle = {Advances in Neural Information Processing Systems},
        year    = {2022}
    }

    @Article{AlphaFold2021,
        author  = {Jumper, John and Evans, Richard and Pritzel, Alexander and Green, Tim and Figurnov, Michael and Ronneberger, Olaf and Tunyasuvunakool, Kathryn and Bates, Russ and {\v{Z}}{\'\i}dek, Augustin and Potapenko, Anna and Bridgland, Alex and Meyer, Clemens and Kohl, Simon A A and Ballard, Andrew J and Cowie, Andrew and Romera-Paredes, Bernardino and Nikolov, Stanislav and Jain, Rishub and Adler, Jonas and Back, Trevor and Petersen, Stig and Reiman, David and Clancy, Ellen and Zielinski, Michal and Steinegger, Martin and Pacholska, Michalina and Berghammer, Tamas and Bodenstein, Sebastian and Silver, David and Vinyals, Oriol and Senior, Andrew W and Kavukcuoglu, Koray and Kohli, Pushmeet and Hassabis, Demis},
        journal = {Nature},
        title   = {Highly accurate protein structure prediction with {AlphaFold}},
        year    = {2021},
        doi     = {10.1038/s41586-021-03819-2},
        note    = {(Accelerated article preview)},
    }

    @inproceedings{kim2022reversible,
        title   = {Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift},
        author  = {Taesung Kim and Jinhee Kim and Yunwon Tae and Cheonbok Park and Jang-Ho Choi and Jaegul Choo},
        booktitle = {International Conference on Learning Representations},
        year    = {2022},
        url     = {https://openreview.net/forum?id=cGDAkQo1C0p}
    }

    @inproceedings{Katsch2023GateLoopFD,
        title   = {GateLoop: Fully Data-Controlled Linear Recurrence for Sequence Modeling},
        author  = {Tobias Katsch},
        year    = {2023},
        url     = {https://api.semanticscholar.org/CorpusID:265018962}
    }

    @article{Zhou2024ValueRL,
        title   = {Value Residual Learning For Alleviating Attention Concentration In Transformers},
        author  = {Zhanchao Zhou and Tianyi Wu and Zhiyun Jiang and Zhenzhong Lan},
        journal = {ArXiv},
        year    = {2024},
        volume  = {abs/2410.17897},
        url     = {https://api.semanticscholar.org/CorpusID:273532030}
    }

    @article{Zhu2024HyperConnections,
        title   = {Hyper-Connections},
        author  = {Defa Zhu and Hongzhi Huang and Zihao Huang and Yutao Zeng and Yunyao Mao and Banggu Wu and Qiyang Min and Xun Zhou},
        journal = {ArXiv},
        year    = {2024},
        volume  = {abs/2409.19606},
        url     = {https://api.semanticscholar.org/CorpusID:272987528}
    }

    Visit original content creator repository