Blog

  • I2Bplus-tree

    ⚠️ This repository has been archived (02/08/2020). Further developments continue at https://github.com/most-inesctec/I2Bplus-tree! ⚠️

    Improved Interval B+ tree implementation (I2B+ tree)

    The Interval B+ tree (IB+ tree) is a valid-time indexing structure, first introduced by Bozkaya and Ozsoyoglu. It is a time-efficient indexing structure for the management of valid-time intervals.

    In this repository, we present the Improved Interval B+ tree (I2B+ tree), an indexing structure based on the IB+ tree, but with minor structural changes to improve the performance of the deletion operation. For a more detailed analysis of the I2B+ tree, refer to the paper published in the CISTI’2020 Conference, available at IEEE.

    This structure performs all operations (insertion, search and deletion) with logarithmic performance (O(log n)). Documentation is available here.

    Benchmark plots (performance vs. varying dataset size): Insertion, Range Search, Deletion.

    For an in-depth analysis of both the parameter tuning (such as the tree’s order or the time-splits alpha value) and the conclusions obtained from the performance analysis of the I2B+ tree, check the benchmarks folder.

    Usage

    To adapt the I2BplusTree to your needs, implement a class that extends the FlatInterval class, defining there the information that will be stored on the leaves. You may also want to override the equals method, so that comparisons incorporate the extra information stored in the Intervals, as sketched below.
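
    As a rough illustration of that pattern, here is a minimal TypeScript sketch. The import path, the super(lower, upper) constructor call, and the base-class equals signature are assumptions made for this example; check the generated documentation for the actual API.

    // Hypothetical import path; adjust to how the library is included in your project.
    import { FlatInterval } from 'i2bplus-tree';

    // Interval carrying an extra payload: the id of the record valid in this time span.
    class RecordInterval extends FlatInterval {
      constructor(lower: number, upper: number, private recordId: string) {
        super(lower, upper); // assumed FlatInterval constructor signature
      }

      getRecordId(): string {
        return this.recordId;
      }

      // Override equals so that comparisons (e.g. during deletion) also take the
      // payload into account, distinguishing records that share the same bounds.
      equals(other: FlatInterval): boolean {
        return super.equals(other)
          && other instanceof RecordInterval
          && other.recordId === this.recordId;
      }
    }

    With a class like this, the intervals stored on the tree's leaves keep their payload, and the equality checks used by search and deletion respect it.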

    Acknowledgements

    This work was financed by the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation – COMPETE 2020 Programme and by National Funds through the Portuguese funding agency, FCT – Fundação para a Ciência e a Tecnologia within project PTDC/CCI-INF/32636/2017 (POCI-01-0145-FEDER-032636).

    This work is also part of MOST.

    Citation

    E. Carneiro, A. V. d. Carvalho and M. A. Oliveira, “I2B+tree: Interval B+ tree variant towards fast indexing of time-dependent data,” 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), Sevilla, Spain, 2020, pp. 1-7, doi: 10.23919/CISTI49556.2020.9140897.

    Visit original content creator repository https://github.com/EdgarACarneiro/I2Bplus-tree
  • jekyll-stealthy-share

    jekyll-stealthy-share

    This is a Jekyll plugin that adds a Liquid tag to inject share buttons into your blog.

    The share buttons are HTML-only and trigger no JavaScript, so they won’t track your blog’s visitors on behalf of Facebook, Twitter, Reddit or whoever else.

    The injected HTML and CSS are simple and easy to customize or extend.

    See it in action on https://netflux.io.

    Installation

    Add jekyll-stealthy-share to your blog’s Gemfile:

    group :jekyll_plugins do
      gem 'jekyll-stealthy-share', git: 'https://github.com/rfwatson/jekyll-stealthy-share.git'
    end

    And add it to your _config.yml:

    plugins:
      - jekyll-stealthy-share

    Usage

    Somewhere in your layout (for example _includes/head.html), include the share button CSS:

    {% stealthy_share_assets %}

    To inject the share buttons into your post, use this tag:

    {% stealthy_share_buttons %}

    Customizing/adding/removing buttons

    To re-order or remove buttons, you can pass arguments to the liquid tag. For example:

    {% stealthy_share_buttons: facebook, twitter, reddit %}

    It’s also possible to add new templates of your own. If a directory _includes/share_buttons exists in your site’s root folder, jekyll-stealthy-share will read templates from this location instead.

    See the _includes directory for an idea of the expected layout of each template. Additionally, you could choose to not include {% stealthy_share_assets %} and write your own custom CSS.

    TODO

    • Add more share button options
    • Make customization of buttons easier (YAML file format to define?)
    • Improve default styling
    • Write unit tests

    Contributions

    Welcome.

    Credits

    The share button SVG templates, colours and some styling are all from http://sharingbuttons.io/.

    License

    MIT

    Contact

    rfwatson via GitHub

    Visit original content creator repository https://github.com/rfwatson/jekyll-stealthy-share
  • animation_tools

    Animation Tools

    The standardized, easy-to-use and well-tested command-line Dart tool for processing animations in various formats (currently, the Spine format). Feel free to use it in your awesome project.

    Share some ❤️ and star the repo to support the Animation Tools.

    If you write an article about AnimationTools or any of these packages, let me know and I’ll post the URL of the article in the README 🤝

    🎞️ Supported Formats

    • Spine
    • You can add your own format

    🚀 Usage

    Run the commands below in a terminal from the folder animation_tools/bin/. Recommended terminal: Cmder.

    Working with an Animation Folder

    Copy Animation Folder

    dart main.dart --source path/to/a --copy path/to/b

    Scale Animation Folder

    dart main.dart --source path/to/b --scale 0.75

    Working with a Concrete Animation

    Move and Rename Animation

    dart main.dart --source path/to/b --move_animation 'idle idle_1'

    Remove Animation

    dart main.dart --source path/to/b --remove_animation 'idle'

    Leave Only Declared Animations

    dart main.dart --source path/to/b --leave_animations 'idle walk run shoot'

    🔬 Advanced Usage

    Commands can be written in one line. For example, copy and scale:

    dart main.dart --source path/to/a --copy path/to/b --scale 0.75

    You can see all available commands and notes with:

    dart main.dart --help

    🏗️ Project Structure

    • bin Entrypoint.
    • lib Source code.
    • test Unit tests with examples.

    👀 Example of Spine Files

    This is one of the animations from a real project. Source: test/data/owl.

    atlas

    owl.webp
    size: 1108, 836
    format: RGBA8888
    filter: Linear, Linear
    repeat: none
    owl_beak_1
      rotate: false
      xy: 2, 690
      size: 122, 143
      orig: 122, 143
      offset: 0, 0
      index: -1
    owl_beak_2
      rotate: false
      xy: 132, 9
      size: 86, 122
      orig: 86, 122
      offset: 0, 0
      index: -1
    owl_body
      rotate: false
      xy: 2, 100
      size: 756, 733
      orig: 756, 733
      offset: 0, 0
      index: -1
    owl_crest
      rotate: false
      xy: 895, 668
      size: 210, 162
      orig: 212, 164
      offset: 1, 1
      index: -1
    owl_eye_left_1
      rotate: false
      xy: 895, 417
      size: 157, 249
      orig: 161, 252
      offset: 2, 2
      index: -1
    owl_eye_left_2
      rotate: false
      xy: 519, 3
      size: 121, 120
      orig: 123, 122
      offset: 1, 1
      index: -1
    owl_eye_right_1
      rotate: false
      xy: 760, 194
      size: 191, 221
      orig: 195, 225
      offset: 2, 2
      index: -1
    owl_eye_right_2
      rotate: false
      xy: 953, 79
      size: 125, 127
      orig: 128, 129
      offset: 2, 1
      index: -1
    owl_finger_1
      rotate: false
      xy: 642, 70
      size: 121, 194
      orig: 121, 195
      offset: 0, 0
      index: -1
    owl_finger_2
      rotate: false
      xy: 2, 17
      size: 142, 182
      orig: 142, 182
      offset: 0, 0
      index: -1
    owl_finger_3
      rotate: false
      xy: 754, 2
      size: 99, 190
      orig: 99, 190
      offset: 0, 0
      index: -1
    owl_finger_4
      rotate: false
      xy: 953, 208
      size: 110, 207
      orig: 112, 207
      offset: 2, 0
      index: -1
    owl_wing_left
      rotate: false
      xy: 760, 496
      size: 133, 337
      orig: 136, 340
      offset: 2, 2
      index: -1
    

    json

    {
      "skeleton": {
        "hash": "waRFUj5162I",
        "spine": "3.7-from-3.8-from-4.0.56",
        "x": -2301.65,
        "y": 3650.58,
        "width": 804.96,
        "height": 923.46,
        "images": "../images 3.8.99/owl/",
        "audio": "../images 3.8.99/owl/"
      },
      "bones": [
        { "name": "root" },
        {
          "name": "bone",
          "parent": "root",
          "length": 692.89,
          "rotation": 115.85,
          "x": -1831.48,
          "y": 3812.56,
          "color": "ffc300ff"
        },
        {
          "name": "bone2",
          "parent": "bone",
          "x": 329.23,
          "y": -56.35,
          "color": "ffc300ff"
        },
        {
          "name": "bone3",
          "parent": "bone2",
          "x": -122.94,
          "y": -357.18,
          "color": "ffc300ff"
        },
        {
          "name": "bone4",
          "parent": "bone",
          "length": 52.78,
          "rotation": -0.78,
          "x": 64.78,
          "y": -215.32,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone5",
          "parent": "bone4",
          "length": 44.07,
          "rotation": -48.99,
          "x": 52.78,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone6",
          "parent": "bone5",
          "length": 49.69,
          "rotation": -81.63,
          "x": 44.07,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone7",
          "parent": "bone6",
          "length": 49.59,
          "rotation": -97.54,
          "x": 49.69,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone8",
          "parent": "bone",
          "length": 23.79,
          "rotation": -20.1,
          "x": 37.49,
          "y": -164.09,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone9",
          "parent": "bone8",
          "length": 41.63,
          "rotation": -46.06,
          "x": 23.79,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone10",
          "parent": "bone9",
          "length": 44.57,
          "rotation": -78,
          "x": 41.63,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone11",
          "parent": "bone10",
          "length": 56.43,
          "rotation": -100.43,
          "x": 59.7,
          "y": -2.07,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone12",
          "parent": "bone",
          "length": 40.68,
          "rotation": -86.78,
          "x": 98.27,
          "y": 130.38,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone13",
          "parent": "bone12",
          "length": 44.18,
          "rotation": -85.54,
          "x": 40.68,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone14",
          "parent": "bone13",
          "length": 52.21,
          "rotation": -85.6,
          "x": 44.18,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone15",
          "parent": "bone14",
          "length": 50.39,
          "rotation": -87.4,
          "x": 52.21,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone16",
          "parent": "bone",
          "length": 34.44,
          "rotation": -101.5,
          "x": 140.03,
          "y": 186.41,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone17",
          "parent": "bone16",
          "length": 50.97,
          "rotation": -99.04,
          "x": 34.44,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone18",
          "parent": "bone17",
          "length": 38.18,
          "rotation": -81.38,
          "x": 50.97,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "bone19",
          "parent": "bone18",
          "length": 58.27,
          "rotation": -76.37,
          "x": 38.18,
          "transform": "noRotationOrReflection",
          "color": "ffc300ff"
        },
        {
          "name": "owl_eye_right_1",
          "parent": "bone2",
          "length": 233.25,
          "rotation": 8.25,
          "x": -95.66,
          "y": 143.36,
          "color": "ffc300ff"
        },
        {
          "name": "bone21",
          "parent": "bone2",
          "length": 242.03,
          "rotation": -29.26,
          "x": -121.26,
          "y": -99.47,
          "scaleX": 1.1228,
          "color": "ffc300ff"
        },
        {
          "name": "bone22",
          "parent": "owl_eye_right_1",
          "x": 54.28,
          "y": -23.33,
          "color": "ffc300ff"
        },
        {
          "name": "bone23",
          "parent": "bone21",
          "x": 53.75,
          "y": -14.43,
          "color": "ffc300ff"
        },
        {
          "name": "bone24",
          "parent": "bone2",
          "length": 81.37,
          "rotation": -158.78,
          "x": -84.49,
          "y": 14.22,
          "color": "ffc300ff"
        },
        {
          "name": "bone25",
          "parent": "bone2",
          "rotation": -158.78,
          "x": -216.14,
          "y": -17.55,
          "color": "ff3f00ff"
        },
        {
          "name": "bone26",
          "parent": "bone2",
          "length": 43.04,
          "rotation": -174.77,
          "x": -102.78,
          "y": 41.39,
          "color": "ffc300ff"
        },
        {
          "name": "bone27",
          "parent": "bone2",
          "rotation": -174.77,
          "x": -201.11,
          "y": 29.54,
          "color": "ff3f00ff"
        },
        {
          "name": "owl_crest",
          "parent": "bone",
          "length": 156.47,
          "rotation": -40.36,
          "x": 652.75,
          "y": -3.13,
          "color": "ffc300ff"
        }
      ],
      "slots": [
        { "name": "owl_wing_left", "bone": "bone3", "attachment": "owl_wing_left" },
        { "name": "owl_crest", "bone": "owl_crest", "attachment": "owl_crest" },
        {
          "name": "owl_eye_left_1",
          "bone": "bone21",
          "attachment": "owl_eye_left_1"
        },
        {
          "name": "owl_eye_right_1",
          "bone": "owl_eye_right_1",
          "attachment": "owl_eye_right_1"
        },
        {
          "name": "owl_eye_left_2",
          "bone": "bone23",
          "attachment": "owl_eye_left_2"
        },
        {
          "name": "owl_eye_right_2",
          "bone": "bone22",
          "attachment": "owl_eye_right_2"
        },
        { "name": "owl_body", "bone": "root", "attachment": "owl_body" },
        { "name": "owl_beak_2", "bone": "root", "attachment": "owl_beak_2" },
        { "name": "owl_beak_1", "bone": "root", "attachment": "owl_beak_1" },
        { "name": "owl_finger_3", "bone": "root", "attachment": "owl_finger_3" },
        { "name": "owl_finger_4", "bone": "root", "attachment": "owl_finger_4" },
        { "name": "owl_finger_2", "bone": "root", "attachment": "owl_finger_2" },
        { "name": "owl_finger_1", "bone": "root", "attachment": "owl_finger_1" }
      ],
      "ik": [
        {
          "name": "bone25",
          "bones": ["bone24"],
          "target": "bone25"
        },
        {
          "name": "bone27",
          "order": 1,
          "bones": ["bone26"],
          "target": "bone27"
        }
      ],
      "transform": [
        {
          "name": "11",
          "order": 2,
          "bones": ["owl_crest"],
          "target": "bone2",
          "rotation": -40.36,
          "local": true,
          "x": 323.53,
          "y": 53.22,
          "shearY": 360,
          "translateMix": -1,
          "scaleMix": 0,
          "shearMix": 0
        }
      ],
      "animations": {
        "idle": {
          "bones": {
            "bone": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -2.07 },
                { "time": 2.6667, "angle": 0.0 }
              ],
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": 15.82 },
                { "time": 2.6667 }
              ]
            },
            "bone2": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -0.07 },
                { "time": 2.6667, "angle": 0.0 }
              ],
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": -10.22, "y": 0.49 },
                { "time": 2.6667 }
              ],
              "scale": [
                { "time": 0.0, "x": 1.0, "y": 1.0 },
                { "time": 1.3333, "x": 1.015, "y": 1.015 },
                { "time": 2.6667, "x": 1.0, "y": 1.0 }
              ]
            },
            "bone4": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -12.53 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone5": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -12.91 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone6": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -5.03 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone7": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": 4.49 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone8": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -11.17 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone9": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -8.01 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone10": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -8.08 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone11": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": 1.8 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone12": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -12.45 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone13": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -4.56 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone14": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -2.12 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone15": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": 2.89 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone16": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -11.75 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone17": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -6.69 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone18": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": -4.28 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone19": {
              "rotate": [
                { "time": 0.0, "angle": 0.0 },
                { "time": 1.3333, "angle": 6.87 },
                { "time": 2.6667, "angle": 0.0 }
              ]
            },
            "bone22": {
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": -6.68, "y": 3.47 },
                { "time": 2.6667 }
              ]
            },
            "bone23": {
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": -5.49, "y": -3.15 },
                { "time": 2.6667 }
              ]
            },
            "bone24": {
              "rotate": [{ "angle": -7.65, "time": 0.0 }]
            },
            "bone26": {
              "rotate": [{ "angle": 1.64, "time": 0.0 }]
            },
            "owl_crest": {
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": 1.38, "y": 3.14 },
                { "time": 2.6667 }
              ]
            }
          }
        },
        "idle_offset": {
          "bones": {
            "bone": {
              "rotate": [
                { "angle": -1.04, "time": 0.0 },
                { "time": 0.6667, "angle": 0.0 },
                { "time": 2, "angle": -2.07 },
                { "time": 2.6667, "angle": -1.04 }
              ],
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": 15.82 },
                { "time": 2.6667 }
              ]
            },
            "bone2": {
              "rotate": [
                { "angle": -0.01, "time": 0.0 },
                { "time": 0.3333, "angle": 0.0 },
                { "time": 1.6667, "angle": -0.07 },
                { "time": 2.6667, "angle": -0.01 }
              ],
              "translate": [
                { "x": -5.11, "y": 0.24, "time": 0.0 },
                { "time": 0.6667 },
                { "time": 2, "x": -10.22, "y": 0.49 },
                { "time": 2.6667, "x": -5.11, "y": 0.24 }
              ],
              "scale": [
                { "x": 1.012, "y": 1.012, "time": 0.0 },
                { "time": 1, "x": 1.0, "y": 1.0 },
                { "time": 2.3333, "x": 1.015, "y": 1.015 },
                { "time": 2.6667, "x": 1.012, "y": 1.012 }
              ]
            },
            "bone4": {
              "rotate": [
                { "angle": -0.32, "time": 0.0 },
                { "time": 0.1, "angle": 0.0 },
                { "time": 1.4333, "angle": -12.53 },
                { "time": 2.6667, "angle": -0.32 }
              ]
            },
            "bone5": {
              "rotate": [
                { "angle": -1.07, "time": 0.0 },
                { "time": 0.2, "angle": 0.0 },
                { "time": 1.5333, "angle": -12.91 },
                { "time": 2.6667, "angle": -1.07 }
              ]
            },
            "bone6": {
              "rotate": [
                { "angle": -0.79, "time": 0.0 },
                { "time": 0.3, "angle": 0.0 },
                { "time": 1.6333, "angle": -5.03 },
                { "time": 2.6667, "angle": -0.79 }
              ]
            },
            "bone7": {
              "rotate": [
                { "angle": 1.09, "time": 0.0 },
                { "time": 0.4, "angle": 0.0 },
                { "time": 1.7333, "angle": 4.49 },
                { "time": 2.6667, "angle": 1.09 }
              ]
            },
            "bone8": {
              "rotate": [
                { "angle": -0.29, "time": 0.0 },
                { "time": 0.1, "angle": 0.0 },
                { "time": 1.4333, "angle": -11.17 },
                { "time": 2.6667, "angle": -0.29 }
              ]
            },
            "bone9": {
              "rotate": [
                { "angle": -0.66, "time": 0.0 },
                { "time": 0.2, "angle": 0.0 },
                { "time": 1.5333, "angle": -8.01 },
                { "time": 2.6667, "angle": -0.66 }
              ]
            },
            "bone10": {
              "rotate": [
                { "angle": -1.27, "time": 0.0 },
                { "time": 0.3, "angle": 0.0 },
                { "time": 1.6333, "angle": -8.08 },
                { "time": 2.6667, "angle": -1.27 }
              ]
            },
            "bone11": {
              "rotate": [
                { "angle": 0.43, "time": 0.0 },
                { "time": 0.4, "angle": 0.0 },
                { "time": 1.7333, "angle": 1.8 },
                { "time": 2.6667, "angle": 0.43 }
              ]
            },
            "bone12": {
              "rotate": [
                { "angle": -0.32, "time": 0.0 },
                { "time": 0.1, "angle": 0.0 },
                { "time": 1.4333, "angle": -12.45 },
                { "time": 2.6667, "angle": -0.32 }
              ]
            },
            "bone13": {
              "rotate": [
                { "angle": -0.38, "time": 0.0 },
                { "time": 0.2, "angle": 0.0 },
                { "time": 1.5333, "angle": -4.56 },
                { "time": 2.6667, "angle": -0.38 }
              ]
            },
            "bone14": {
              "rotate": [
                { "angle": -0.33, "time": 0.0 },
                { "time": 0.3, "angle": 0.0 },
                { "time": 1.6333, "angle": -2.12 },
                { "time": 2.6667, "angle": -0.33 }
              ]
            },
            "bone15": {
              "rotate": [
                { "angle": 0.7, "time": 0.0 },
                { "time": 0.4, "angle": 0.0 },
                { "time": 1.7333, "angle": 2.89 },
                { "time": 2.6667, "angle": 0.7 }
              ]
            },
            "bone16": {
              "rotate": [
                { "angle": -0.3, "time": 0.0 },
                { "time": 0.1, "angle": 0.0 },
                { "time": 1.4333, "angle": -11.75 },
                { "time": 2.6667, "angle": -0.3 }
              ]
            },
            "bone17": {
              "rotate": [
                { "angle": -0.56, "time": 0.0 },
                { "time": 0.2, "angle": 0.0 },
                { "time": 1.5333, "angle": -6.69 },
                { "time": 2.6667, "angle": -0.56 }
              ]
            },
            "bone18": {
              "rotate": [
                { "angle": -0.67, "time": 0.0 },
                { "time": 0.3, "angle": 0.0 },
                { "time": 1.6333, "angle": -4.28 },
                { "time": 2.6667, "angle": -0.67 }
              ]
            },
            "bone19": {
              "rotate": [
                { "angle": 1.66, "time": 0.0 },
                { "time": 0.4, "angle": 0.0 },
                { "time": 1.7333, "angle": 6.87 },
                { "time": 2.6667, "angle": 1.66 }
              ]
            },
            "bone22": {
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": -6.68, "y": 3.47 },
                { "time": 2.6667 }
              ]
            },
            "bone23": {
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": -5.49, "y": -3.15 },
                { "time": 2.6667 }
              ]
            },
            "bone24": {
              "rotate": [{ "angle": -7.65, "time": 0.0 }]
            },
            "bone25": {
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": 8.35, "y": -8.64 },
                { "time": 2.6667 }
              ]
            },
            "bone26": {
              "rotate": [{ "angle": 1.64, "time": 0.0 }]
            },
            "owl_crest": {
              "translate": [
                { "time": 0.0 },
                { "time": 1.3333, "x": 1.38, "y": 3.14 },
                { "time": 2.6667 }
              ]
            }
          }
        }
      },
      "skins": {
        "default": {
          "owl_beak_1": {
            "owl_beak_1": {
              "type": "mesh",
              "uvs": [
                0.67924, 0.13613, 0.94039, 0.47385, 1, 0.75109, 1, 1, 0.84938, 1,
                0.75594, 0.85921, 0.61577, 0.71412, 0.27004, 0.57062, 0, 0.38089, 0,
                0, 0.4057, 0, 0.84294, 0.74288, 0.92216, 0.72438, 0.79762, 0.49828,
                0.68977, 0.56705, 0.51047, 0.18411, 0.32634, 0.37393, 0.25572,
                0.06528, 0.10078, 0.19783
              ],
              "triangles": [
                12, 1, 2, 4, 2, 3, 4, 5, 2, 2, 11, 12, 11, 2, 5, 5, 6, 11, 13, 12,
                11, 11, 6, 14, 11, 14, 13, 12, 13, 1, 6, 7, 14, 7, 16, 14, 13, 14,
                15, 14, 16, 15, 13, 0, 1, 15, 0, 13, 7, 8, 16, 8, 18, 16, 8, 9, 18,
                18, 17, 16, 16, 17, 15, 18, 9, 17, 17, 10, 15, 15, 10, 0, 17, 9, 10
              ],
              "vertices": [
                2, 24, 33.33, 48.3, 0.736, 25, -94.77, 61.47, 0.264, 2, 24, 90.87,
                42.26, 0.28, 25, -38.55, 47.81, 0.72, 1, 25, -6.22, 23.74, 1, 1, 25,
                18.03, -2.31, 1, 1, 25, 4.58, -14.83, 1, 1, 25, -17.48, -7.86, 1, 2,
                24, 92.27, -10.15, 0.28, 25, -44.14, -4.32, 0.72, 2, 24, 49.63,
                -29.71, 0.47999, 25, -88.99, -18.03, 0.52001, 2, 24, 7.75, -37.94,
                0.912, 25, -131.6, -20.6, 0.088, 1, 24, -34.33, -3.36, 1, 2, 24,
                -2.9, 34.88, 0.968, 25, -132.46, 52.99, 0.032, 1, 25, -21.04, 11.55,
                1, 1, 25, -15.77, 20.07, 1, 2, 24, 82.51, 26.58, 0.22572, 25,
                -48.92, 33.39, 0.77428, 2, 24, 81.75, 10.17, 0.32, 25, -51.85,
                17.23, 0.68, 2, 24, 25.55, 28.04, 0.59999, 25, -105.17, 42.42,
                0.40001, 2, 24, 32.26, -6.55, 0.37601, 25, -103.12, 7.25, 0.62399,
                2, 24, -7.31, 14.82, 0.88, 25, -139.5, 33.69, 0.12, 2, 24, -4.66,
                -11.82, 0.824, 25, -140.43, 6.94, 0.176
              ],
              "hull": 11,
              "edges": [
                18, 20, 20, 0, 6, 4, 2, 4, 6, 8, 8, 10, 10, 12, 12, 14, 16, 18, 14,
                16, 10, 22, 22, 24, 24, 4, 2, 26, 26, 28, 28, 12, 0, 30, 30, 32, 32,
                14, 20, 34, 34, 36, 36, 16, 0, 2
              ],
              "width": 122,
              "height": 143
            }
          },
          "owl_beak_2": {
            "owl_beak_2": {
              "type": "mesh",
              "uvs": [
                0.90889, 0.17702, 0.96969, 0.45975, 1, 0.74977, 1, 1, 0.82754, 1,
                0.61801, 0.93639, 0.35424, 0.76957, 0.14963, 0.58364, 0, 0.36295, 0,
                0, 0.7881, 0, 0.4595, 0.05051, 0.19675, 0.17111, 0.28644, 0.36399,
                0.65883, 0.17857, 0.45661, 0.62528, 0.81039, 0.46365, 0.6951,
                0.82757, 0.89836, 0.74652
              ],
              "triangles": [
                18, 1, 2, 15, 16, 18, 17, 15, 18, 6, 15, 17, 3, 18, 2, 3, 4, 18, 5,
                17, 4, 4, 17, 18, 5, 6, 17, 6, 7, 15, 7, 13, 15, 15, 13, 16, 13, 14,
                16, 18, 16, 1, 16, 0, 1, 16, 14, 0, 14, 13, 12, 14, 12, 11, 7, 8,
                13, 8, 9, 12, 14, 10, 0, 14, 11, 10, 12, 9, 11, 11, 9, 10, 8, 12, 13
              ],
              "vertices": [
                2, 26, 21.37, 51.94, 0.45551, 27, -79.13, 49.7, 0.54449, 2, 26,
                53.21, 37.7, 0.13408, 27, -46.89, 36.37, 0.86592, 1, 27, -15.24,
                20.34, 1, 1, 27, 10.91, 4.58, 1, 1, 27, 3.25, -8.12, 1, 1, 27,
                -12.69, -19.55, 1, 2, 26, 56.41, -27.26, 0.20427, 27, -41.84,
                -28.47, 0.79573, 2, 26, 27.81, -29.81, 0.44583, 27, -70.35, -31.83,
                0.55417, 2, 26, -1.8, -26.08, 0.744, 27, -100.05, -28.95, 0.256, 1,
                26, -39.05, -2.15, 1, 2, 26, -2.42, 54.88, 0.79201, 27, -102.98,
                51.95, 0.20799, 2, 26, -12.51, 27.77, 0.808, 27, -112.3, 24.57,
                0.192, 2, 26, -12.34, 0.81, 0.77601, 27, -111.36, -2.38, 0.22399, 2,
                26, 11.62, -5.42, 0.504, 27, -87.22, -7.92, 0.496, 2, 26, 9.9,
                33.75, 0.52001, 27, -90.07, 31.18, 0.47999, 2, 26, 46.35, -10.34,
                0.16504, 27, -52.37, -11.84, 0.83496, 2, 26, 46.21, 25.92, 0.09105,
                27, -53.55, 24.39, 0.90895, 1, 27, -20.64, -7.02, 1, 1, 27, -20.09,
                13.06, 1
              ],
              "hull": 11,
              "edges": [
                6, 8, 8, 10, 10, 12, 12, 14, 16, 18, 14, 16, 18, 20, 20, 0, 20, 22,
                22, 24, 24, 16, 14, 26, 26, 28, 28, 0, 12, 30, 30, 32, 32, 2, 0, 2,
                10, 34, 34, 36, 36, 4, 2, 4, 4, 6
              ],
              "width": 86,
              "height": 122
            }
          },
          "owl_body": {
            "owl_body": {
              "type": "mesh",
              "uvs": [
                0.77414, 0.07363, 0.91978, 0.22153, 1, 0.43092, 1, 0.69135, 0.91619,
                0.81508, 0.80097, 0.91187, 0.58716, 1, 0.42289, 1, 0.2373, 0.9332,
                0.07586, 0.75358, 0, 0.55955, 0, 0.38266, 0.03143, 0.24924, 0.20499,
                0.07356, 0.41413, 0, 0.60832, 0, 0.18473, 0.46752, 0.25497, 0.42875,
                0.34905, 0.46692, 0.43146, 0.58973, 0.43023, 0.70511, 0.33594,
                0.75993, 0.26852, 0.75498, 0.15022, 0.56426, 0.58833, 0.566,
                0.60229, 0.31005, 0.66318, 0.26497, 0.74865, 0.26651, 0.80088,
                0.32786, 0.7994, 0.572, 0.74499, 0.61961, 0.643, 0.62056, 0.49269,
                0.47469, 0.21145, 0.38628, 0.5298, 0.61525, 0.42618, 0.8574,
                0.55929, 0.75121, 0.7121, 0.77521, 0.59333, 0.87771, 0.43957,
                0.27465, 0.17214, 0.25751, 0.35821, 0.13928, 0.57086, 0.09644,
                0.10577, 0.55386, 0.14802, 0.71229, 0.23079, 0.81933, 0.76648,
                0.23977, 0.84863, 0.31468, 0.91722, 0.5039, 0.64376, 0.19892,
                0.54786, 0.25434, 0.13915, 0.45517, 0.26709, 0.35874, 0.38322,
                0.37701
              ],
              "triangles": [
                50, 42, 49, 26, 49, 46, 50, 49, 26, 27, 26, 46, 25, 50, 26, 28, 27,
                46, 25, 39, 50, 24, 32, 25, 28, 25, 26, 28, 26, 27, 24, 25, 28, 29,
                24, 28, 29, 28, 48, 30, 24, 29, 31, 24, 30, 31, 34, 24, 37, 31, 30,
                30, 29, 4, 47, 28, 46, 18, 52, 53, 17, 52, 18, 16, 51, 33, 16, 33,
                17, 18, 53, 32, 23, 43, 51, 16, 23, 51, 19, 18, 32, 18, 19, 23, 20,
                19, 34, 18, 16, 17, 19, 21, 22, 18, 23, 16, 19, 22, 23, 44, 23, 22,
                20, 21, 19, 35, 21, 20, 21, 45, 22, 17, 33, 52, 46, 49, 0, 42, 14,
                15, 42, 15, 0, 41, 13, 14, 41, 14, 42, 49, 42, 0, 46, 0, 1, 40, 12,
                13, 40, 13, 41, 47, 46, 1, 47, 1, 2, 40, 11, 12, 51, 11, 40, 48, 47,
                2, 51, 10, 11, 43, 10, 51, 48, 2, 3, 29, 48, 3, 44, 43, 23, 9, 10,
                43, 9, 43, 44, 4, 29, 3, 37, 30, 4, 45, 44, 22, 5, 37, 4, 38, 37, 5,
                35, 8, 45, 9, 44, 45, 8, 9, 45, 7, 35, 38, 45, 21, 35, 7, 8, 35, 6,
                7, 38, 6, 38, 5, 28, 47, 48, 42, 39, 41, 50, 39, 42, 52, 40, 41, 52,
                41, 39, 53, 52, 39, 33, 40, 52, 51, 40, 33, 25, 32, 39, 53, 39, 32,
                34, 19, 32, 24, 34, 32, 31, 36, 34, 20, 34, 36, 36, 31, 37, 35, 20,
                36, 38, 36, 37, 35, 36, 38
              ],
              "vertices": [
                2, 2, 202.03, -328.86, 0.05681, 1, 531.25, -385.21, 0.94319, 2, 2,
                56.45, -380.69, 0.03009, 1, 385.68, -437.04, 0.96991, 2, 2, -108.11,
                -368.36, 0.02456, 1, 221.11, -424.71, 0.97544, 1, 1, 49.31, -341.49,
                1, 2, 2, -333.92, -188.58, 0.06493, 1, -4.69, -244.93, 0.93507, 2,
                2, -359.79, -79.26, 0.10952, 1, -30.56, -135.61, 0.89048, 2, 2,
                -347.46, 94.38, 0.1067, 1, -18.23, 38.03, 0.8933, 2, 2, -293.32,
                206.15, 0.07464, 1, 35.91, 149.8, 0.92536, 2, 2, -188.08, 311.07,
                0.05475, 1, 141.15, 254.72, 0.94525, 2, 2, -16.39, 363.51, 0.08374,
                1, 312.84, 307.16, 0.91626, 2, 2, 136.62, 353.12, 0.07977, 1,
                465.84, 296.77, 0.92023, 2, 2, 253.31, 296.59, 0.0917, 1, 582.53,
                240.24, 0.9083, 2, 2, 330.96, 232.57, 0.08255, 1, 660.19, 176.23,
                0.91745, 2, 2, 389.65, 58.35, 0.15448, 1, 718.87, 2, 0.84552, 2, 2,
                369.25, -107.45, 0.10515, 1, 698.47, -163.8, 0.89485, 2, 2, 305.24,
                -239.58, 0.07945, 1, 634.47, -295.93, 0.92055, 1, 20, 237.54, 20.8,
                1, 1, 20, 231.32, -39.1, 1, 1, 20, 168.28, -82.31, 1, 1, 20, 58.81,
                -83.45, 1, 1, 20, -10.71, -35.28, 1, 1, 20, -4.03, 46.28, 1, 1, 20,
                27.55, 86.46, 1, 1, 20, 193.45, 82.16, 1, 1, 21, 22.74, 76.61, 1, 1,
                21, 190.09, 77.24, 1, 1, 21, 221.91, 33.26, 1, 1, 21, 224.33,
                -31.31, 1, 1, 21, 186.44, -73.41, 1, 1, 21, 27.29, -82.93, 1, 1, 21,
                -5.92, -43.95, 1, 1, 21, -10.63, 32.97, 1, 1, 2, 30.21, -9.21, 1, 3,
                2, 181.23, 153.88, 0.60306, 1, 510.45, 97.53, 0.22094, 20, 275.54,
                -29.3, 0.176, 1, 2, -74.74, 10.46, 1, 2, 2, -200.33, 158.33,
                0.48861, 1, 128.89, 101.98, 0.51139, 2, 2, -174.15, 33.84, 0.85114,
                1, 155.07, -22.51, 0.14886, 2, 2, -240.35, -62.46, 0.59879, 1,
                88.87, -118.81, 0.40121, 2, 2, -268.82, 51.1, 0.46504, 1, 60.41,
                -5.25, 0.53496, 2, 2, 179.68, -36.99, 0.91587, 1, 508.91, -93.34,
                0.08413, 2, 2, 279.13, 139.48, 0.47954, 1, 608.35, 83.13, 0.52046,
                2, 2, 295.8, -24.9, 0.50794, 1, 625.02, -81.25, 0.49206, 2, 2,
                253.97, -183.27, 0.38635, 1, 583.19, -239.62, 0.61365, 2, 2, 105.51,
                279.34, 0.30707, 1, 434.74, 222.99, 0.69293, 2, 2, -12.93, 301.22,
                0.33599, 1, 316.29, 244.87, 0.66401, 2, 2, -110.82, 279.11, 0.3159,
                1, 218.4, 222.76, 0.6841, 3, 2, 94.95, -270.56, 0.1355, 1, 424.17,
                -326.91, 0.2965, 21, 242.46, -43.6, 0.568, 2, 2, 18.46, -302.52,
                0.68995, 1, 347.68, -358.87, 0.31005, 2, 2, -128.97, -288.72,
                0.5258, 1, 200.25, -345.07, 0.4742, 3, 2, 162.34, -200.12, 0.12175,
                1, 491.57, -256.47, 0.25965, 21, 264.17, 50.79, 0.6186, 3, 2,
                157.39, -117.17, 0.43601, 1, 486.61, -173.52, 0.13076, 21, 224.21,
                120.74, 0.43322, 3, 2, 159.61, 225.09, 0.04137, 1, 488.84, 168.74,
                0.24542, 20, 264.36, 44.26, 0.7132, 3, 2, 181.06, 107.23, 0.65208,
                1, 510.28, 50.88, 0.18792, 20, 268.68, -75.45, 0.16, 3, 2, 130.73,
                34.06, 0.44431, 1, 459.95, -22.29, 0.05017, 20, 208.37, -140.65,
                0.50553
              ],
              "hull": 16,
              "edges": [
                22, 24, 24, 26, 26, 28, 28, 30, 30, 0, 0, 2, 2, 4, 4, 6, 6, 8, 8,
                10, 10, 12, 12, 14, 14, 16, 16, 18, 20, 22, 18, 20, 32, 34, 34, 36,
                36, 38, 38, 40, 40, 42, 42, 44, 44, 46, 46, 32, 48, 50, 50, 52, 52,
                54, 54, 56, 56, 58, 58, 60, 60, 62, 48, 62
              ],
              "width": 756,
              "height": 733
            }
          },
          "owl_crest": {
            "owl_crest": {
              "x": 85.09,
              "y": -18.25,
              "rotation": -68.16,
              "width": 212,
              "height": 164
            }
          },
          "owl_eye_left_1": {
            "owl_eye_left_1": {
              "x": 118.29,
              "y": -2.73,
              "rotation": -86.59,
              "width": 161,
              "height": 252
            }
          },
          "owl_eye_left_2": {
            "owl_eye_left_2": {
              "x": 4.29,
              "y": 14.11,
              "rotation": -86.59,
              "width": 123,
              "height": 122
            }
          },
          "owl_eye_right_1": {
            "owl_eye_right_1": {
              "x": 112.28,
              "y": -0.48,
              "rotation": -124.09,
              "width": 195,
              "height": 225
            }
          },
          "owl_eye_right_2": {
            "owl_eye_right_2": {
              "x": 5.29,
              "y": 21.7,
              "rotation": -124.09,
              "width": 128,
              "height": 129
            }
          },
          "owl_finger_1": {
            "owl_finger_1": {
              "type": "mesh",
              "uvs": [
                0.047, 0.07095, 0.47601, 0.0052, 0.52243, 0.0074, 0.57359, 0.01183,
                0.78033, 0.08746, 0.87272, 0.16229, 0.92192, 0.22524, 0.9594,
                0.28385, 0.99999, 0.38789, 1, 0.51515, 0.94739, 0.65527, 0.8913,
                0.73235, 0.6213, 1, 0.4771, 1, 0.36883, 0.77265, 0.33334, 0.7127,
                0.30269, 0.64335, 0.27936, 0.58652, 0.25298, 0.56151, 0.20696,
                0.53466, 0.10524, 0.4839, 0.04969, 0.46232, 0.02593, 0.42721, 0,
                0.28541
              ],
              "triangles": [
                16, 17, 9, 9, 17, 8, 7, 8, 17, 7, 17, 18, 7, 18, 6, 18, 19, 6, 10,
                16, 9, 15, 16, 10, 11, 15, 10, 19, 5, 6, 19, 4, 5, 19, 3, 4, 19, 20,
                3, 20, 21, 2, 2, 21, 22, 20, 2, 3, 2, 22, 1, 1, 23, 0, 1, 22, 23,
                11, 12, 14, 12, 13, 14, 14, 15, 11
              ],
              "vertices": [
                1, 8, -15.8, 17.53, 1, 2, 8, 28.55, 47.41, 0.63823, 9, -16.49, 44.7,
                0.36177, 3, 8, 33.97, 48.94, 0.58756, 9, -12.28, 48.45, 0.4124, 10,
                -71.38, 12.6, 4.0e-5, 3, 8, 40.08, 50.25, 0.53448, 9, -7.36, 52.31,
                0.46495, 10, -69.25, 18.47, 5.7e-4, 3, 8, 68.64, 45, 0.15674, 9,
                20.62, 60.09, 0.76127, 10, -49.62, 39.87, 0.08199, 3, 8, 84.15,
                35.13, 0.03322, 9, 38.88, 58.01, 0.64799, 10, -33.02, 47.78,
                0.31879, 3, 8, 93.96, 25.65, 0.01379, 9, 51.85, 53.78, 0.5453, 10,
                -19.78, 51.05, 0.44091, 3, 8, 102.14, 16.48, 0.00437, 9, 63.23,
                49.12, 0.43717, 10, -7.66, 53.11, 0.55846, 4, 11, -64.25, 33.82,
                2.7e-4, 8, 113.73, -0.89, 1.3e-4, 9, 81.24, 38.58, 0.29102, 10,
                13.21, 53.7, 0.70858, 3, 11, -39.84, 38.31, 0.08756, 9, 99.12,
                21.36, 0.02947, 10, 37.48, 48.54, 0.88297, 2, 11, -11.81, 37,
                0.12817, 10, 62.89, 36.63, 0.87183, 2, 11, 4.2, 33.04, 0.63356, 10,
                76.18, 26.87, 0.36644, 1, 11, 61.44, 10.36, 1, 1, 11, 64.6, -6.8, 1,
                3, 11, 23.37, -27.71, 0.84525, 9, 82.28, -68.48, 9.4e-4, 10, 70.72,
                -36.6, 0.15381, 3, 11, 12.64, -34.05, 0.66559, 9, 70.88, -63.46,
                0.01037, 10, 58.4, -38.37, 0.32403, 4, 11, 0.01, -40.14, 0.48197, 8,
                51.6, -76.66, 3.3e-4, 9, 58.57, -56.74, 0.02901, 10, 44.4, -39.19,
                0.48869, 4, 11, -10.37, -44.93, 0.26241, 8, 45.15, -67.22, 0.01512,
                9, 48.63, -51.09, 0.12763, 10, 32.97, -39.65, 0.59483, 4, 11,
                -14.59, -48.95, 0.12258, 8, 40.47, -63.74, 0.04148, 9, 42.9, -50,
                0.24039, 10, 27.54, -41.76, 0.59555, 4, 11, -18.73, -55.37, 0.05328,
                8, 33.45, -60.73, 0.08364, 9, 35.27, -50.38, 0.35508, 10, 21.26,
                -46.11, 0.508, 4, 11, -26.24, -69.27, 0.00237, 8, 18.49, -55.67,
                0.2618, 9, 19.6, -52.37, 0.4901, 10, 9.02, -56.1, 0.24573, 4, 11,
                -29.16, -76.64, 2.5e-4, 8, 10.73, -54.03, 0.33509, 9, 11.91, -54.29,
                0.47919, 10, 3.5, -61.8, 0.18547, 3, 8, 5.67, -48.58, 0.39996, 9,
                4.98, -51.61, 0.45486, 10, -3.79, -63.19, 0.14518, 3, 8, -6.77,
                -23.69, 0.77201, 9, -17.11, -34.68, 0.19967, 10, -31.49, -60.51,
                0.02832
              ],
              "hull": 24,
              "edges": [
                0, 46, 44, 42, 42, 40, 40, 38, 38, 36, 36, 34, 34, 32, 32, 30, 30,
                28, 28, 26, 24, 26, 24, 22, 22, 20, 18, 16, 16, 14, 14, 12, 12, 10,
                10, 8, 8, 6, 0, 2, 2, 4, 4, 6, 44, 46, 44, 2, 42, 4, 40, 6, 38, 10,
                36, 12, 34, 14, 30, 20, 28, 22, 18, 20, 32, 18
              ],
              "width": 121,
              "height": 195
            }
          },
          "owl_finger_2": {
            "owl_finger_2": {
              "type": "mesh",
              "uvs": [
                0.62385, 0.00901, 0.7158, 0.03523, 0.79919, 0.09591, 0.92581,
                0.25312, 0.96864, 0.31209, 1, 0.38015, 0.98401, 0.63239, 0.95345,
                0.68459, 0.91207, 0.73245, 0.75239, 1, 0.64033, 1, 0.50299, 0.75382,
                0.48687, 0.71633, 0.48047, 0.6713, 0.44639, 0.56753, 0.42053,
                0.50868, 0.37743, 0.46161, 0.3214, 0.41453, 0.2546, 0.38763, 0,
                0.35232, 0, 0
              ],
              "triangles": [
                8, 9, 11, 9, 10, 11, 11, 12, 8, 7, 8, 13, 8, 12, 13, 7, 13, 6, 13,
                14, 6, 6, 14, 5, 14, 15, 5, 15, 4, 5, 15, 3, 4, 3, 16, 2, 2, 16, 1,
                15, 16, 3, 1, 16, 17, 0, 1, 17, 17, 18, 0, 0, 18, 20, 18, 19, 20
              ],
              "vertices": [
                2, 4, 74.33, 40.22, 0.5836, 5, -15.63, 42.87, 0.4164, 2, 4, 87.45,
                35.63, 0.39947, 5, -3.46, 49.59, 0.60053, 3, 4, 99.44, 24.75,
                0.21421, 5, 12.64, 51.28, 0.78121, 6, -54.13, 26.23, 0.00458, 3, 4,
                117.81, -3.61, 0.01095, 5, 46.03, 46.08, 0.74371, 6, -23.2, 39.85,
                0.24533, 3, 4, 124.04, -14.26, 6.8e-4, 5, 58.12, 43.62, 0.56203, 6,
                -11.7, 44.31, 0.43729, 3, 5, 70.39, 38.85, 0.36931, 6, 1.2, 46.91,
                0.63043, 7, -59.49, 31.83, 2.6e-4, 3, 5, 103.54, 7.02, 0.00144, 6,
                46.29, 37.99, 0.74039, 7, -13.68, 35.6, 0.25817, 2, 6, 55.06, 32.31,
                0.56363, 7, -3.69, 32.55, 0.43637, 2, 6, 62.82, 25.23, 0.30676, 7,
                5.71, 27.86, 0.69324, 1, 7, 56.96, 11.77, 1, 1, 7, 59.05, -4, 1, 3,
                5, 75.4, -59.03, 0.00303, 6, 58.22, -32.8, 0.26032, 7, 17.19,
                -29.21, 0.73665, 3, 5, 68.75, -56.28, 0.01062, 6, 51.14, -34.08,
                0.40539, 7, 10.73, -32.38, 0.58399, 4, 4, 55.62, -80.58, 2.6e-4, 5,
                61.97, -51.58, 0.03232, 6, 42.89, -33.78, 0.58589, 7, 2.72, -34.35,
                0.38153, 4, 4, 50.53, -61.76, 0.02018, 5, 44.54, -42.84, 0.20283, 6,
                23.51, -35.82, 0.70869, 7, -15.37, -41.63, 0.06831, 4, 4, 46.71,
                -51.1, 0.0808, 5, 34.05, -38.59, 0.38992, 6, 12.38, -37.9, 0.51653,
                7, -25.5, -46.67, 0.01275, 4, 4, 40.47, -42.62, 0.21925, 5, 23.57,
                -37.58, 0.4924, 6, 3.01, -42.71, 0.2875, 7, -33.19, -53.87, 8.5e-4,
                3, 4, 32.4, -34.16, 0.48603, 5, 11.88, -37.96, 0.40462, 6, -6.63,
                -49.33, 0.10935, 3, 4, 22.85, -29.4, 0.763, 5, 1.96, -41.91,
                0.20377, 6, -12.85, -58, 0.03323, 2, 4, -13.39, -23.47, 0.99956, 5,
                -26.61, -64.98, 4.4e-4, 1, 4, -14.27, 40.65, 1
              ],
              "hull": 21,
              "edges": [
                38, 40, 38, 36, 36, 34, 34, 32, 32, 30, 30, 28, 28, 26, 26, 24, 24,
                22, 22, 20, 18, 20, 18, 16, 16, 14, 14, 12, 10, 8, 8, 6, 6, 4, 4, 2,
                2, 0, 10, 12, 40, 0
              ],
              "width": 142,
              "height": 182
            }
          },
          "owl_finger_3": {
            "owl_finger_3": {
              "type": "mesh",
              "uvs": [
                0.99481, 0.17569, 0.99999, 0.22879, 0.99999, 0.27957, 0.99999,
                0.42915, 0.98913, 0.49183, 0.97203, 0.54771, 0.94657, 0.63311,
                0.92833, 0.68878, 0.90697, 0.74284, 0.65942, 1, 0.47541, 1, 0.17287,
                0.78739, 0.11566, 0.73711, 0.08511, 0.68457, 0.00285, 0.55312,
                1.0e-5, 0.48284, 0.0114, 0.41529, 0.04763, 0.25471, 0.06239,
                0.20423, 0.08861, 0.15279, 0.19532, 1.0e-5, 0.81364, 1.0e-5
              ],
              "triangles": [
                9, 10, 8, 10, 11, 8, 8, 11, 7, 7, 11, 12, 12, 13, 7, 7, 13, 6, 6,
                13, 5, 5, 13, 14, 14, 15, 5, 5, 15, 4, 15, 16, 4, 4, 16, 3, 16, 17,
                3, 3, 17, 2, 17, 18, 2, 18, 1, 2, 18, 19, 1, 19, 0, 1, 0, 19, 21,
                19, 20, 21
              ],
              "vertices": [
                3, 16, 16.81, 48.45, 0.79346, 17, -15.54, 49.17, 0.19091, 18,
                -48.46, 67.02, 0.01563, 3, 16, 26.59, 50.97, 0.67673, 17, -5.66,
                51.26, 0.28977, 18, -38.41, 66.02, 0.0335, 4, 16, 36.04, 52.89,
                0.5333, 17, 3.87, 52.77, 0.40079, 18, -28.87, 64.57, 0.0659, 19,
                -61.16, 70.18, 1.0e-5, 4, 16, 63.9, 58.55, 0.18031, 17, 31.94,
                57.24, 0.52793, 18, -0.77, 60.31, 0.27092, 19, -33.54, 63.49,
                0.02084, 4, 16, 75.78, 59.88, 0.09965, 17, 43.87, 58.05, 0.45947,
                18, 10.84, 57.47, 0.38003, 19, -22.22, 59.63, 0.06085, 4, 16, 86.52,
                60.33, 0.05211, 17, 54.62, 58.04, 0.35679, 18, 21.08, 54.2, 0.46022,
                19, -12.3, 55.49, 0.13088, 4, 16, 102.92, 61.1, 0.01392, 17, 71.04,
                58.1, 0.18809, 18, 36.75, 49.28, 0.47124, 19, 2.88, 49.22, 0.32675,
                4, 16, 113.65, 61.43, 0.00404, 17, 81.77, 57.98, 0.10572, 18, 46.94,
                45.91, 0.38847, 19, 12.73, 44.97, 0.50178, 4, 16, 124.14, 61.41,
                6.3e-4, 17, 92.25, 57.51, 0.05469, 18, 56.78, 42.28, 0.27731, 19,
                22.22, 40.49, 0.66737, 1, 19, 63.93, 5.16, 1, 2, 18, 98.68, -7.28,
                0, 19, 59.63, -12.54, 1, 2, 18, 54.26, -30.84, 0.26397, 19, 13.32,
                -32.13, 0.73603, 3, 17, 103.48, -20.03, 1.6e-4, 18, 43.96, -35.01,
                0.48379, 19, 2.7, -35.38, 0.51605, 3, 17, 94.1, -24.59, 0.00959, 18,
                33.64, -36.51, 0.68415, 19, -7.71, -35.97, 0.30626, 3, 17, 70.71,
                -36.55, 0.2344, 18, 7.73, -40.82, 0.74233, 19, -33.91, -38, 0.02327,
                4, 16, 93.62, -36.42, 2.0e-5, 17, 57.57, -38.93, 0.4778, 18, -5.52,
                -39.1, 0.52075, 19, -46.95, -35.13, 0.00143, 3, 16, 80.82, -37.88,
                0.00697, 17, 44.72, -39.83, 0.73085, 18, -18.04, -36.06, 0.26218, 3,
                16, 50.21, -40.44, 0.22984, 17, 14.02, -41.08, 0.76581, 18, -47.67,
                -27.94, 0.00436, 3, 16, 40.52, -40.92, 0.39736, 17, 4.32, -41.15,
                0.60261, 18, -56.93, -25.06, 2.0e-5, 2, 16, 30.42, -40.33, 0.5983,
                17, -5.74, -40.12, 0.4017, 2, 16, -0.13, -35.76, 0.97393, 17,
                -36.07, -34.25, 0.02607, 2, 16, -12.33, 24.22, 0.99949, 17, -45.69,
                26.21, 5.1e-4
              ],
              "hull": 22,
              "edges": [
                40, 38, 38, 36, 36, 34, 40, 42, 42, 0, 0, 2, 2, 4, 4, 6, 6, 8, 8,
                10, 10, 12, 12, 14, 14, 16, 16, 18, 18, 20, 20, 22, 22, 24, 24, 26,
                26, 28, 28, 30, 30, 32, 32, 34, 22, 16, 24, 14, 26, 12, 10, 28, 8,
                30, 6, 32, 4, 34, 36, 2, 0, 38
              ],
              "width": 99,
              "height": 190
            }
          },
          "owl_finger_4": {
            "owl_finger_4": {
              "type": "mesh",
              "uvs": [
                0.93194, 0.21884, 0.96666, 0.25592, 0.99211, 0.29962, 0.99502,
                0.42964, 0.99803, 0.46465, 1, 0.50195, 0.96692, 0.70721, 0.91179,
                0.75223, 0.82014, 0.80762, 0.60099, 1, 0.44873, 1, 0.28183, 0.8201,
                0.15738, 0.76002, 0.06304, 0.71338, 0.03052, 0.53103, 0.02575,
                0.4899, 0.02983, 0.44317, 0.04951, 0.30463, 0.0601, 0.269, 0.0737,
                0.22975, 0.20876, 0, 0.71902, 0
              ],
              "triangles": [
                8, 9, 11, 9, 10, 11, 8, 11, 7, 7, 11, 12, 7, 12, 6, 6, 12, 13, 13,
                14, 6, 6, 14, 5, 14, 15, 5, 15, 4, 5, 4, 16, 3, 4, 15, 16, 3, 17, 2,
                3, 16, 17, 17, 1, 2, 17, 18, 1, 18, 0, 1, 18, 19, 0, 19, 21, 0, 19,
                20, 21
              ],
              "vertices": [
                3, 12, 35.18, 57.5, 0.44834, 13, -4.26, 57.6, 0.43311, 14, -48.5,
                57.55, 0.11855, 3, 12, 43.06, 60.95, 0.34404, 13, 3.69, 60.88,
                0.48669, 14, -40.55, 60.84, 0.16927, 3, 12, 52.25, 63.28, 0.2547,
                13, 12.93, 63.02, 0.51431, 14, -31.31, 62.99, 0.23099, 4, 12, 79.14,
                62.09, 0.06506, 13, 39.79, 61.25, 0.42003, 14, -4.45, 61.25,
                0.50763, 15, -58.54, 59.44, 0.00729, 4, 12, 86.4, 62.02, 0.03737,
                13, 47.04, 61.03, 0.3553, 14, 2.8, 61.03, 0.59023, 15, -51.29,
                59.45, 0.01709, 4, 12, 94.12, 61.81, 0.01938, 13, 54.76, 60.65,
                0.28463, 14, 10.52, 60.66, 0.66081, 15, -43.57, 59.32, 0.03518, 3,
                13, 96.83, 53.65, 0.02447, 14, 52.6, 53.7, 0.59691, 15, -1.29,
                53.69, 0.37862, 3, 13, 105.64, 46.77, 0.00918, 14, 61.42, 46.83,
                0.47319, 15, 7.74, 47.1, 0.51762, 3, 13, 116.27, 35.65, 4.0e-4, 14,
                72.06, 35.72, 0.21984, 15, 18.73, 36.32, 0.77976, 1, 15, 57.4, 10,
                1, 1, 15, 56.62, -7.04, 1, 3, 13, 114.17, -24.66, 1.2e-4, 14, 70.01,
                -24.59, 0.11738, 15, 18.57, -24.02, 0.8825, 3, 13, 100.69, -37.59,
                0.01581, 14, 56.54, -37.54, 0.45429, 15, 5.51, -37.38, 0.5299, 3,
                13, 90.24, -47.38, 0.05063, 14, 46.11, -47.33, 0.59404, 15, -4.61,
                -47.5, 0.35533, 4, 12, 94.02, -46.94, 2.3e-4, 13, 52.32, -48.07,
                0.44745, 14, 8.19, -48.07, 0.50112, 15, -42.48, -49.42, 0.0512, 4,
                12, 85.49, -46.99, 0.00375, 13, 43.79, -47.95, 0.58809, 14, -0.34,
                -47.95, 0.38236, 15, -51.01, -49.57, 0.02581, 4, 12, 75.86, -45.99,
                0.0199, 13, 34.19, -46.74, 0.72651, 14, -9.95, -46.75, 0.24387, 15,
                -60.66, -48.67, 0.00971, 3, 12, 47.35, -42.18, 0.26979, 13, 5.77,
                -42.31, 0.71571, 14, -38.37, -42.35, 0.01449, 3, 12, 40.05, -40.58,
                0.39907, 13, -1.49, -40.56, 0.59752, 14, -45.63, -40.6, 0.00341, 3,
                12, 32.03, -38.6, 0.55826, 13, -9.48, -38.41, 0.4417, 14, -53.62,
                -38.46, 4.0e-5, 1, 12, -14.6, -20.82, 1, 3, 12, -11.39, 36.23,
                0.97656, 13, -51.28, 37.34, 0.02305, 14, -95.49, 37.25, 3.8e-4
              ],
              "hull": 22,
              "edges": [
                40, 38, 38, 36, 36, 34, 34, 32, 32, 30, 30, 28, 28, 26, 26, 24, 24,
                22, 22, 20, 18, 20, 16, 14, 14, 12, 12, 10, 10, 8, 8, 6, 6, 4, 4, 2,
                2, 0, 40, 42, 0, 42, 32, 6, 30, 8, 28, 10, 34, 4, 36, 2, 38, 0, 26,
                12, 24, 14, 22, 16, 16, 18
              ],
              "width": 112,
              "height": 207
            }
          },
          "owl_wing_left": {
            "owl_wing_left": {
              "x": -18.48,
              "y": 26.11,
              "rotation": -115.85,
              "width": 136,
              "height": 340
            }
          }
        }
      }
    }

    image

    Spine animation tools

    ✨ What’s New

    Look at changelog.

    👋 Welcome

    If you encounter any problems, feel free to open an issue. If you feel the package is missing a feature, please raise a ticket on Github and I’ll look into it. Requests and suggestions are warmly welcome. Danke!

    Contributions are what make the open-source community such a great place to learn, create, pick up new skills, and be inspired.

    If this is your first contribution, I’ll leave you with some of the best links I’ve found: they will help you get started and/or become even more efficient.

    The package AnimationTools is open-source, stable and well-tested. Development happens on GitHub. Feel free to report issues or create a pull-request there.

    General questions are best asked on StackOverflow.

    And here is a curated list of how you can help:

    • Documenting the undocumented. Whenever you come across a class, property, or method within our codebase that you’re familiar with and notice it lacks documentation, kindly spare a couple of minutes to jot down some helpful notes for your fellow developers.
    • Refining the code. While I’m aware it’s primarily my responsibility to refactor the code, I wholeheartedly welcome any contributions you’re willing to make in this area. Your insights and improvements are appreciated!
    • Constructive code reviews. Should you discover a more efficient approach to achieve something, I’m all ears. Your suggestions for enhancement are invaluable.
    • Sharing your examples. If you’ve experimented with our use cases or have crafted some examples of your own, feel free to add them to the example directory. Your practical insights can enrich our resource pool.
    • Fix typos/grammar mistakes.
    • Report bugs and scenarios that are difficult to implement.
    • Implement new features by making a pull-request.

    ✅ TODO (perhaps)

    Once you start using AnimationTools, it will become easy to choose functionality to contribute. But if you already get everything you need from this package and still have some free time, here is what I have planned:

    • Support for popular animation formats.

    It’s just a habit of mine: writing down ideas that come to mind while working on a project. I confess that I rarely return to these notes. But now, hopefully, even if you don’t have an idea yet, the notes above will help you choose a suitable “feature” and become a contributor to the open-source community.

    Ready for 🪙

    Created with ❤️

    fresher

    Visit original content creator repository https://github.com/syrokomskyi/animation_tools
  • bhl6-smart-power

    Smart Power – Documentation

    Team: Drineczki (W. Łazarski, J. Radzimiński, J. Szumski, K. Kamieniarz)

    1. Analysis of the processes involved

    In the whole heating cycle we distinguish two main processes: heating the water and
    maintaining the desired temperature. These are the two main sources of demand for
    electricity. In addition, there is the power consumption of the remaining appliances,
    but we assume it is constant for the given hours of operation.

    Water heating:
    The water tank can store 150 litres of hot water. The average daily water consumption
    is 180 litres. Throughout the whole process we assume that the tank does not lose the
    heat of the water, so we do not need energy to maintain its temperature. We also assume
    that the tank is filled with water at the moment the heating is switched on.

    Maintaining the temperature in the house:
    The energy consumption of the heating system depends mainly on the temperature outside.
    It determines how quickly the house loses heat and how much power is required to raise
    the temperature by 1 degree Celsius.
    Additionally, a heat-recovery ventilation unit is available in the house, which evens
    out the average temperature between rooms. The heating system aims to maintain the
    temperatures defined by the user.

    We have the following sources of electricity available:

    • photovoltaic panels
    • a battery
    • the power grid

    2. Choice and justification of the approach

    Let us start by defining a certain time period T.

    Next, we divide T into equal intervals, each lasting e.g. one hour.

    In the next step we compute all possible combinations of operating modes of our
    energy-management system.

    where m_i is an operating mode of the system.

    Then, for each time interval in T, we compute the cost of the energy drawn from the
    grid, taking into account the output parameters of the previous interval (battery
    charge level, room temperature). Additionally, we use the OpenWeather API to forecast
    the weather and cloud cover for the next time interval.

    Input:

    • room temperature reached in the previous time interval
    • battery charge level reached in the previous time interval
    • predicted temperature for the newly considered time interval (OpenWeather API)
    • predicted cloud cover for the newly considered time interval (OpenWeather API)

    Output:

    • new room temperature after the time interval, assuming mode m_i
    • new battery charge level, assuming mode m_i
    • estimated energy cost for the given iteration

    Finally, we sum the total cost of electricity drawn from the grid over the whole
    sequence of operating modes:

    {m1 , m4 , m2 , m1 } → 5
    {m2 , m1 , m2 , m4 } → 7
    {m4 , m1 , m4 , m2 } → 6
    {m1 , m1 , m3 , m3 } → 13
    

    As the operating mode of the system for the next time interval we choose the first
    mode of the sequence that returned the lowest predicted cost.
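
    A minimal sketch of this exhaustive search is shown below; all names, modes and the
    cost/transition model are hypothetical placeholders, not the project's actual code:

    from itertools import product

    MODES = ["m1", "m2", "m3", "m4"]   # hypothetical operating modes
    HORIZON = 4                        # number of time intervals in T

    def simulate(mode, state, forecast):
        """Placeholder transition model: returns (new_state, grid_cost) for one interval."""
        temperature, battery = state
        # ... a real physics / pricing model would go here ...
        return (temperature, battery), 0.0

    def best_next_mode(initial_state, forecasts):
        best_sequence, best_cost = None, float("inf")
        for sequence in product(MODES, repeat=HORIZON):        # every mode combination
            state, total_cost = initial_state, 0.0
            for mode, forecast in zip(sequence, forecasts):
                state, cost = simulate(mode, state, forecast)
                total_cost += cost
            if total_cost < best_cost:
                best_sequence, best_cost = sequence, total_cost
        return best_sequence[0]   # first mode of the cheapest sequence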

    3. How the individual pieces of information and data are used

    All available information is used to estimate the electricity that the user will be
    forced to draw from the grid, and then to calculate the cost of that consumption. The
    given pieces of information form a non-trivial function, which we then minimize with
    the algorithm described earlier.

    4. Choice of technologies, system requirements

    As the main system architecture we adopted a standard client-server web architecture.
    The client is a web application written in TypeScript using the React.js framework.
    The server was written in Python as a REST API, using the Flask framework. The
    algorithms used in the optimization functions were also written in Python, as separate
    modules used by the server. All communication happens via HTTP requests.
    Such an architecture allows the application to run on any device that supports a web
    browser. Additionally, most of the computation takes place on a separate server
    (ultimately hosted in the cloud), so the computing power of the given device is not
    important. A cloud server would also make it easy to attach further modules/devices to
    the system in the future. A minimal sketch of such a server endpoint is shown below.
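
    The endpoint name and payload below are assumptions made for illustration only, not
    the project's actual API:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/recommended-mode", methods=["POST"])   # hypothetical endpoint
    def recommended_mode():
        state = request.get_json()   # e.g. {"temperature": 21.5, "battery": 0.6}
        mode = "m1"                  # placeholder for the result of the optimization above
        return jsonify({"mode": mode, "input": state})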

    5. Testability of the solution

    Testing our system would consist of creating simulation scenarios for the system and
    monitoring the solution in terms of cost estimates compared with a human setting the
    operating modes. Naturally, a range of test scenarios would let us monitor the system
    and its behaviour in various situations.
    A scenario is nothing more than an asynchronous series of calls of various changes
    happening both in the house and in its surroundings. Thanks to the numerous "mock"
    modules simulating household devices (which we created as independent Python modules),
    the system can be tested independently of the (physical) devices or components missing
    at a given moment.

    6. Implementation of the energy-management control

    Control of the operating mode of the energy-management controller is implemented on
    the server side, using the algorithm described above.

    7. Library handling the controllable components

    In order to test the relevant system components, we created a Python module running on
    the server side which imitates the operation of the devices at different times of the
    year (to adjust the temperature) and of the day (to adjust the amount of sunlight,
    needed to calculate the efficiency of the photovoltaic panels).

    8. Configurability

    The system is fully configurable and scalable; it only requires implementing and
    deploying further device implementations. The operating values of the devices for the
    given time intervals can be set to a default configuration: either parsed from the
    provided API or hard-coded to a default value. Neither of these options rules out
    creating a module for the user to set these parameters manually.

    9. System reliability

    The system is reliable in the sense that it automatically selects the best possible
    operating mode, taking the weather forecast into account. At the same time, even if
    the system does not make a perfectly accurate estimate, e.g. because of a wrong
    weather forecast, the selected operating mode will still cover the required energy
    demand. From the user's point of view nothing changes: the house itself will
    intelligently try to choose an operating mode of the system that provides the right
    air and water temperature, minimizing the cost of electricity along the way.

    Visit original content creator repository
    https://github.com/radziminski/bhl6-smart-power

  • Emotet_Analysis-2

    QUICK & DIRTY MALWARE ANALYSIS

    This repository contains documents relating to a malware analysis conducted on 8/3/2020. The analysis was conducted for the purpose of finding correlations in attack patterns being observed against a specific target to see if any of the attacks are related.

    BACKGROUND

    1. These samples were obtained via fake email accounts.
    2. The email account which sent the sample was not trusted by the recipient.
    3. The email did not get caught by any spam or virus protection.

    NOTES

    1. Most malicious samples have been removed and replaced with their hash values for security reasons.
    2. The original email attachment is included under “\Malicious_Dropper\MALWARE_SAMPLE_8-3-2020.zip.”
    3. The machine used is a Windows 7 Professional SP1 Build 7601 on bare-metal.
    4. The username used was ADMIN.
    5. The hostname used was SANDY.
    6. All indicators of compromise detected were identified as belonging to the Emotet family of Trojans.
    7. Emotet is a versatile trojan initially designed for information theft, remote persistence, ransomware delivery, and botnet management.
    8. Emotet propagates primarily through infected email attachments and phishing campaigns.

    THEORY

    1. I do not believe this campaign is part of an attack aimed at a specific organization.
    2. I believe the attackers are spraying malicious email attachments at known U.S. manufacturing companies.
    3. I believe the attackers will indiscriminately send malicious emails to any address they find.
    4. The technological complexity of the dropper was slightly Above Average.
    5. The technological complexity of the payload was Average.
    6. The social engineering complexity of this campaign was Negligible.
    7. This campaign primarily relies on human elements to infect a target rather than technical vulnerabilities.

    DEFENSE

    1. Organizational commitment to training.
    2. Due diligence on an individual level.
    3. Email address whitelisting or email domain blacklisting.

    Visit original content creator repository
    https://github.com/zelon88/Emotet_Analysis-2

  • browserexport

    browserexport

    PyPi version Python 3.9|3.10|3.11|3.12|3.13 PRs Welcome

    This:

    • locates and backs up browser history by copying the underlying database files to some directory you specify
    • can identify and parse the resulting database files into some common schema:
    Visit:
      url: the url
      dt: datetime (when you went to this page)
      metadata:
        title: the <title> for this page
        description: the <meta description> tag from this page
        preview_image: 'main image' for this page, often opengraph/favicon
        duration: how long you were on this page
    

    metadata is dependent on the data available in the browser (e.g. firefox has preview images, chrome has duration, but not vice versa)

    Supported Browsers

    This currently supports:

    This can probably extract visits from other Firefox/Chromium-based browsers, but it doesn’t know how to locate them to save them

    Install

    python3 -m pip install --user browserexport

    Requires python3.9+

    Usage

    save

    Usage: browserexport save [OPTIONS]
    
      Backs up a current browser database file
    
    Options:
      -b, --browser
          [chrome | firefox | opera | safari | brave | waterfox |
          librewolf | floorp | chromium | vivaldi | palemoon | arc |
          edge | edgedev]
                                      Browser name to backup history for
      --pattern TEXT                  Pattern for the resulting timestamped filename, should include an
                                      str.format replacement placeholder for the date [default:
                                      browser_name-{}.extension]
      -p, --profile TEXT              Use to pick the correct profile to back up. If unspecified, will assume a
                                      single profile  [default: *]
      --path FILE                     Specify a direct path to a database to back up
      -t, --to DIRECTORY              Directory to store backup to. Pass '-' to print database to STDOUT
                                      [required]
      -h, --help                      Show this message and exit.
    

    Must specify one of --browser, or --path

    After your browser history reaches a certain size, browsers typically remove old history over time, so I’d recommend backing up your history periodically, like:

    $ browserexport save -b firefox --to ~/data/browsing
    $ browserexport save -b chrome --to ~/data/browsing
    $ browserexport save -b safari --to ~/data/browsing

    That copies the sqlite databases which contain your history --to some backup directory.

    If a browser you want to backup is Firefox/Chrome-like (so this would be able to parse it), but this doesn’t support locating it yet, you can directly back it up with the --path flag:

    $ browserexport save --path ~/.somebrowser/profile/places.sqlite \
      --to ~/data/browsing

    The --pattern argument can be used to change the resulting filename for the browser, e.g. --pattern 'places-{}.sqlite' or --pattern "$(uname)-{}.sqlite". The {} is replaced by the current date/timestamp.

    Feel free to create an issue/contribute a browser file to locate the browser if this doesn’t support some browser you use.

    Can pass the --debug flag to show sqlite_backup logs

    $ browserexport --debug save -b firefox --to .
    [D 220202 10:10:22 common:87] Glob /home/username/.mozilla/firefox with */places.sqlite (non recursive) matched [PosixPath('/home/username/.mozilla/firefox/ew9cqpqe.dev-edition-default/places.sqlite')]
    [I 220202 10:10:22 save:18] backing up /home/username/.mozilla/firefox/ew9cqpqe.dev-edition-default/places.sqlite to /home/username/Repos/browserexport/firefox-20220202181022.sqlite
    [D 220202 10:10:22 core:110] Source database files: '['/tmp/tmpcn6gpj1v/places.sqlite', '/tmp/tmpcn6gpj1v/places.sqlite-wal']'
    [D 220202 10:10:22 core:111] Temporary Destination database files: '['/tmp/tmpcn6gpj1v/places.sqlite', '/tmp/tmpcn6gpj1v/places.sqlite-wal']'
    [D 220202 10:10:22 core:64] Copied from '/home/username/.mozilla/firefox/ew9cqpqe.dev-edition-default/places.sqlite' to '/tmp/tmpcn6gpj1v/places.sqlite' successfully; copied without file changing: True
    [D 220202 10:10:22 core:64] Copied from '/home/username/.mozilla/firefox/ew9cqpqe.dev-edition-default/places.sqlite-wal' to '/tmp/tmpcn6gpj1v/places.sqlite-wal' successfully; copied without file changing: True
    [D 220202 10:10:22 core:230] Running backup, from '/tmp/tmpcn6gpj1v/places.sqlite' to '/home/username/Repos/browserexport/firefox-20220202181022.sqlite'
    [D 220202 10:10:22 save:14] Copied 1840 of 1840 database pages...
    [D 220202 10:10:22 core:246] Executing 'wal_checkpoint(TRUNCATE)' on destination '/home/username/Repos/browserexport/firefox-20220202181022.sqlite'
    

    For Firefox Android Fenix, the database has to be manually backed up (probably from a rooted phone using termux) from data/data/org.mozilla.fenix/files/places.sqlite.

    inspect/merge

    These work very similarly, inspect is for a single database, merge is for multiple databases.

    Usage: browserexport merge [OPTIONS] SQLITE_DB...
    
      Extracts visits from multiple sqlite databases
    
      Provide multiple sqlite databases as positional arguments, e.g.:
      browserexport merge ~/data/firefox/*.sqlite
    
      Drops you into a REPL to access the data
    
      Pass '-' to read from STDIN
    
    Options:
      -s, --stream  Stream JSON objects instead of printing a JSON list
      -j, --json    Print result to STDOUT as JSON
      -h, --help    Show this message and exit.
    

    As an example:

    browserexport --debug merge ~/data/firefox/* ~/data/chrome/*
    [D 210417 21:12:18 merge:38] merging information from 24 sources...
    [D 210417 21:12:18 parse:19] Reading visits from /home/username/data/firefox/places-20200828223058.sqlite...
    [D 210417 21:12:18 common:40] Chrome: Running detector query 'SELECT * FROM keyword_search_terms'
    [D 210417 21:12:18 common:40] Firefox: Running detector query 'SELECT * FROM moz_meta'
    [D 210417 21:12:18 parse:22] Detected as Firefox
    [D 210417 21:12:19 parse:19] Reading visits from /home/username/data/firefox/places-20201010031025.sqlite...
    [D 210417 21:12:19 common:40] Chrome: Running detector query 'SELECT * FROM keyword_search_terms'
    ....
    [D 210417 21:12:48 common:40] Firefox: Running detector query 'SELECT * FROM moz_meta'
    [D 210417 21:12:48 common:40] Safari: Running detector query 'SELECT * FROM history_tombstones'
    [D 210417 21:12:48 parse:22] Detected as Safari
    [D 210417 21:12:48 merge:51] Summary: removed 3001879 duplicates...
    [D 210417 21:12:48 merge:52] Summary: returning 334490 visit entries...
    
    Use vis to interact with the data
    
    [1] ...
    

    You can also read from STDIN, so this can be used in conjunction with save, to merge databases you’ve backed up and combine your current browser history:

    browserexport save -b firefox -t - | browserexport merge --json --stream - ~/data/browsing/* >all.jsonl

    Or, use process substitution to save multiple dbs in parallel and then merge them:

    $ browserexport merge <(browserexport save -b firefox -t -) <(browserexport save -b chrome -t -)

    Logs are hidden by default. To show the debug logs set export BROWSEREXPORT_LOGS=10 (uses logging levels) or pass the --debug flag.

    JSON

    To dump all that info to JSON:

    $ browserexport merge --json ~/data/browsing/*.sqlite > ./history.json
    du -h history.json
    67M     history.json

    Or, to create a quick searchable interface, using jq and fzf:

    browserexport merge -j --stream ~/data/browsing/*.sqlite | jq '"\(.url)|\(.metadata.description)"' | awk '!seen[$0]++' | fzf

    Merged files like history.json can also be used as input files themselves; this reads those by mapping the JSON onto the Visit schema directly.

    In addition to .json files, this can parse .jsonl (JSON lines) files, which contain newline-delimited JSON objects. This lets you parse JSON objects one at a time, instead of loading the entire file into memory. The .jsonl file can be generated with the --stream flag:

    browserexport merge --stream --json ~/data/browsing/*.sqlite > ./history.jsonl
    

    Additionally, this can parse compressed JSON/JSONL files (using kompress): .xz, .zip, .lz4, .zstd, .zst, .tar.gz, .gz

    For example, you could do:

    browserexport merge --stream --json ~/data/browsing/*.sqlite | gzip --best > ./history.jsonl.gz
    # test parsing the compressed file
    browserexport --debug inspect ./history.jsonl.gz

    If you don’t care about keeping the raw databases for any other auxiliary info like form, bookmark data, or from_visit info and just want the URL, visit date and metadata, you could use merge to periodically merge the bulky .sqlite files into a gzipped JSONL dump to reduce storage space, and improve parsing speed:

    # backup databases
    rsync -Pavh ~/data/browsing ~/.cache/browsing
    # merge all sqlite databases into a single compressed, jsonl file
    browserexport --debug merge --json --stream ~/data/browsing/* > '/tmp/browsing.jsonl'
    gzip '/tmp/browsing.jsonl'
    # test reading gzipped file
    browserexport --debug inspect '/tmp/browsing.jsonl.gz'
    # remove all old datafiles
    rm ~/data/browsing/*
    # move merged data to database directory
    mv /tmp/browsing.jsonl.gz ~/data/browsing

    I do this every couple of months with a script here, and then sync my old databases to a hard drive for more long-term storage

    Shell Completion

    This uses click, which supports shell completion for bash, zsh and fish. To generate the completion on startup, put one of the following in your shell init file (.bashrc/.zshrc etc)

    eval "$(_BROWSEREXPORT_COMPLETE=bash_source browserexport)" # bash
    eval "$(_BROWSEREXPORT_COMPLETE=zsh_source browserexport)" # zsh
    _BROWSEREXPORT_COMPLETE=fish_source browserexport | source  # fish

    Instead of evaling, you could of course save the generated completion to a file and/or lazy load it in your shell config, see bash completion docs, zsh functions, fish completion docs. For example for zsh that might look like:

    mkdir -p ~/.config/zsh/functions/
    _BROWSEREXPORT_COMPLETE=zsh_source browserexport > ~/.config/zsh/functions/_browserexport
    # in your ~/.zshrc
    # update fpath to include the directory you saved the completion file to
    fpath=(~/.config/zsh/functions $fpath)
    autoload -Uz compinit && compinit

    HPI

    If you want to cache the merged results, this has a module in HPI which handles locating/caching and querying the results. See setup and module setup.

    That uses cachew to automatically cache the merged results, recomputing whenever you back up new databases

    As a few examples:

    ✅ OK  : my.browser.all
    ✅     - stats: {'history': {'count': 1091091, 'last': datetime.datetime(2023, 2, 11, 1, 12, 37, 302883, tzinfo=datetime.timezone.utc)}}
    ✅ OK  : my.browser.export
    ✅     - stats: {'history': {'count': 1090850, 'last': datetime.datetime(2023, 2, 11, 4, 34, 12, 985488, tzinfo=datetime.timezone.utc)}}
    ✅ OK  : my.browser.active_browser
    ✅     - stats: {'history': {'count': 270363, 'last': datetime.datetime(2023, 2, 11, 22, 26, 24, 887722, tzinfo=datetime.timezone.utc)}}
    # supports arbitrary queries, e.g. how many visits did I have in January 2022?
    $ hpi query my.browser.all --order-type datetime --after '2022-01-01 00:00:00' --before '2022-01-31 23:59:59' | jq length
    50432
    # how many github URLs in the past month
    $ hpi query my.browser.all --recent 4w -s | jq .url | grep 'github.com' -c
    16357

    Library Usage

    To save databases:

    from browserexport.save import backup_history
    backup_history("firefox", "~/data/backups")
    # or, pass a Browser implementation
    from browserexport.browsers.all import Firefox
    backup_history(Firefox, "~/data/backups")

    To merge/read visits from databases:

    from browserexport.merge import read_and_merge
    read_and_merge(["/path/to/database", "/path/to/second/database", "..."])
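
    For example, a small script that backs up a couple of browsers and then merges
    everything saved in a backup directory might look like this (the directory is just an
    example and is assumed to already exist):

    from pathlib import Path
    from browserexport.save import backup_history
    from browserexport.merge import read_and_merge

    backup_dir = Path("~/data/browsing").expanduser()

    # back up the current databases for a couple of browsers
    for browser in ("firefox", "chrome"):
        backup_history(browser, str(backup_dir))

    # merge every database saved so far into a single list of visits
    visits = list(read_and_merge(sorted(map(str, backup_dir.glob("*.sqlite")))))
    print(f"merged {len(visits)} visits")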

    You can also use sqlite_backup to copy your current browser history into a sqlite connection in memory, as a sqlite3.Connection

    from browserexport.browsers.all import Firefox
    from browserexport.parse import read_visits
    from sqlite_backup import sqlite_backup
    
    db_in_memory = sqlite_backup(Firefox.locate_database())
    visits = list(read_visits(db_in_memory))
    
    # to merge those with other saved files
    from browserexport.merge import merge_visits, read_and_merge
    merged = list(merge_visits([
        visits,
        read_and_merge(["/path/to/another/database.sqlite", "..."]),
    ]))

    If this doesn’t support a browser and you wish to quickly extend without maintaining a fork (or contributing back to this repo), you can pass a Browser implementation (see browsers/all.py and browsers/common.py for more info) to browserexport.parse.read_visits or programmatically override/add your own browsers as part of the browserexport.browsers namespace package

    Comparisons with Promnesia

    A lot of the initial queries/ideas here were taken from promnesia and the browser_history.py script, but creating a package here allows it to be more extensible, e.g. allowing you to override/locate additional databases.

    TLDR on promnesia: lets you explore your browsing history in context: where you encountered it, in chat, on Twitter, on Reddit, or just in one of the text files on your computer. This is unlike most modern browsers, where you can only see when you visited the link.

    browserexport is now used in promnesia in the browser source, see setup and the browser source quickstart in the instructions for more

    Contributing

    Clone the repository and [optionally] create a virtual environment to do your work in.

    git clone https://github.com/purarue/browserexport
    cd ./browserexport
    # create a virtual environment to prevent possible package dependency conflicts
    python -m virtualenv .venv  # python3 -m pip install virtualenv if missing
    source .venv/bin/activate

    Development

    To install, run:

    python3 -m pip install '.[testing]'

    If running in a virtual environment, pip will automatically install dependencies into your virtual environment. If running browserexport happens to use the globally installed version instead, you can use python3 -m browserexport to ensure it’s using the version in your virtual environment.

    After making changes to the code, reinstall by running pip install ., and then test with browserexport or python3 -m browserexport

    Testing

    While developing, you can run tests with:

    pytest
    flake8 ./browserexport
    mypy ./browserexport
    # to autoformat code
    python3 -m pip install black
    find browserexport tests -name '*.py' -exec python3 -m black {} +
    Visit original content creator repository https://github.com/purarue/browserexport
  • confidence

    Build
    codecov
    Confidence

    Confidence

    A declarative Java Assertion Framework.

    Confidence makes it easier to write Java Unit tests that give you great confidence in your code with little effort.

    Note

    Confidence is still under development. All parts should be considered subject to change.

    Declarative Testing

    Declarative testing means focusing on the What instead of the How.

    Any unit under test (typically a class) has two aspects:

    • What it is meant to do and
    • How you have to use it.

    The How is, to a large extent, determined by the interface of a class or the signature of a function. In the case of mutable classes and non-pure functions, the order of interactions may also be relevant. In any case though, the How is typically very static and, to some extent, also enforced by the compiler. That means we can often use the same methods for testing various implementations of the same type; we just need to provide different data and assert different behavior. That’s the What. A declarative test leaves the How to the test framework and only describes the What.

    Example

    The classic non-declarative test of a Predicate might look like this:

    assertTrue(new IsEven().test(2));
    assertFalse(new IsEven().test(3));

    It contains interface details like the fact that you call the test method and that it returns true in case the argument satisfies the Predicate.

    The declarative test might look like this

    assertThat(new IsEven(),
        is(allOf(
            satisfiedBy(2),
            not(satisfiedBy(3))
        )));

    In this case we don’t see how the instance is tested, we just describe what we expect, namely that 2 satisfies the Predicate and 3 doesn’t. All the method calls and result evaluation are performed by the satisfiedBy Quality, which can be used for every Predicate implementation.

    Qualities

    In Confidence, you use Qualitys to express what you expect of the unit under test. As seen above, Qualitys are composable to express even complex behavior. Confidence already provides many Quality implementations, but to use its full power you should write custom Qualitys for your own types.

    Writing custom Quality implementations

    Confidence already comes with a number of useful Qualitys that cover many JDK types. Yet, it is important to be able to write custom implementations. Ideally you provide a library with Qualitys for all types you declare in your own code. That makes it easier for you and others (for instance users of your library) to write tests.

    Composing Qualities

    In many cases you can write a new Quality by composing it from already existing ones. In fact, many of the Qualitys in the confidence-core module are just compositions of simpler Qualitys.

    Example

    This is the implementation of the EmptyCharSequence Quality, that describes CharSequences and String with a length of 0.

    @StaticFactories(value = "Core", packageName = "org.saynotobugs.confidence.quality")
    public final class EmptyCharSequence extends QualityComposition<CharSequence>
    {
        public EmptyCharSequence()
        {
            super(new Satisfies<>(c -> c.length() == 0, new Text("<empty>")));
        }
    }

    This creates a new Quality composition based on an existing Satisfies Quality. Satisfies takes a Predicate that must be satisfied for the Quality to be satisfied and a Description of the expectation. By default, the fail Description is the actual value, but Satisfies takes an optional argument to create a more adequate fail Description for a given actual value.

    The annotation

    @StaticFactories(value = "Core", packageName = "org.saynotobugs.confidence.quality")
    

    ensures a static factory method like the following is automatically created in a class called Core:

    public static EmptyCharSequence emptyCharSequence() {
        return new org.saynotobugs.confidence.quality.charsequence.EmptyCharSequence();
    }

    Discoverability of Qualities

    When it comes to writing tests, finding the right Quality can often feel like searching for a needle in a haystack. While some frameworks rely on fluent APIs to ease this process, Confidence takes a different approach.

    Instead of a fluent API, Confidence organizes its static factory methods into classes named after the types they describe. This convention simplifies the process of discovering Qualitys, as your IDE may suggest available options simply by typing out the type you’re testing.

    For example, if you’re working with an instance of Iterable (e.g. an ArrayList), you’ll find suitable Qualitys in the org.saynotobugs.confidence.core.quality.Iterable class. While this may differ from the exact naming of the type you’re testing, it ensures a logical organization that aids in discovery.

    However, there are cases where a Quality doesn’t directly correlate to a specific type or serves as an adapter. Currently, Confidence addresses four such scenarios:

    • Compositions: Qualitys like allOf, not, or has are grouped under the Composite class.
    • Grammar Improvements: Qualitys that enhance grammar, such as is, to, and soIt, reside in the Grammar class.
    • Framework Adapters: Adapters to other frameworks, such as the Hamcrest adapter qualifiesAs, are found in the Adapter class.
    • Non-Java Types: Qualitys describing non-Java concepts may reside in a dedicated class, e.g. JSON qualities are housed in the Json class.

    This organization ensures that regardless of the type or scenario you’re testing, Confidence provides a structured and intuitive approach to discovering and utilizing its Qualitys.

    Testing Qualities

    Classic non-declarative tests often have a major flaw: the (often very imperative) test code is not tested itself. After all, you can only trust your production code when you can trust the test code too.

    The functional ideas Confidence is built upon make it easy to test Qualitys and ensure the How has full test coverage.

    Confidence makes it easy to test a Quality. Just describe the expected behavior when you provide instances that are expected to pass and some that are expected to fail the assertion of the Quality under test:

    assertThat(new EmptyCharSequence(),    // The Quality under test.
        new AllOf<>(
            new Passes<>(""),              // An example that should pass the test.
            new Fails<>(" ", "\" \""),     // Examples that should fail the test …
            new Fails<>("123", "\"123\""), // … along with the resulting description.
            new HasDescription("<empty>")  // The description of the Quality.
        ));

    Switching from Hamcrest

    As a Hamcrest user you’ll find it easy to switch to Confidence. The core idea is the same: composable components to describe the expected behavior of your code. In Hamcrest these are called Matcher, in Confidence they are called Quality.

    There are some significant differences though:

    • In case of a mismatch, Hamcrest (for Java) needs to run the Matcher again to get a mismatch description, a Confidence Quality returns an Assessment that contains the result and a description of the issue (in case the assessment failed).
    • Confidence makes it easier to produce comprehensible descriptions, closer to what Assertj or Google Truth produce, by using composable Descriptions
    • In Confidence the Contains Quality has the same semantics as Java Collection.contains(Object)
    • Confidence has out of the box support for testing Quality implementations.

    There are also some noticeable differences in how some of the core Quality implementations are being called or used. The following table shows the most important ones.

    General note on matching arrays: arrays (including arrays of primitive types) can be matched by decorating a matcher for Iterables with arrayThat(…).

    Hamcrest Confidence
    contains(...) iterates(...)
    containsInAnyOrder(...) iteratesInAnyOrder(...)
    iterableWithSize(...) hasNumberOfElements(...)
    hasItem(...) contains(...)
    hasItems(...) containsAllOf(...)
    everyItem(...) eachElement(...)
    sameInstance(...), theInstance(...) sameAs(...)
    matchesRegex(...), matchesPattern(...) matchesPattern(...)
    array(...) arrayThat(iterates(...))*
    hasItemInArray(...) arrayThat(contains(...))*
    arrayWithSize(...) arrayThat(hasNumberOfElements(...))*

    *works with arrays of primitive types

    confidence-hamcrest

    Confidence provides adapters to use Hamcrest Matchers in Confidence assertions and Confidence Qualitys where Hamcrest Matchers are required (for instance when working with rest-assured, mockito or awaitility).

    You can use Hamcrest Matchers with Confidence by including the confidence-hamcrest artifact and adapting it with the matches adapter Quality.

    assertThat(List.of(1,2,5,10,11), matches(hasItem(2)));

    The same module also provides a Hamcrest Matcher called qualifiesAs to use Confidence Qualitys in a test that requires a Matcher:

    response.then().body("id", qualifiesAs(jsonStringOf(object(with("foo", equalTo("bar"))))))

    JUnit Confidence TestEngine

    One of the goals of Confidence is to eliminate any imperative code from unit tests. Unfortunately, with Jupiter you still need to write at least one very imperative assertThat statement.

    That’s why the confidence-incubator module contains an experimental JUnit TestEngine to remove this limitation.

    With the ConfidenceEngine you no longer write statements. Instead, you declare Assertions that are verified when the test runs.

    Check out the HasPatchTest from the dmfs/semver project. It verifies that the HasPatch Quality is satisfied by certain Versions (at present the naming has diverged a bit).

    @Confidence
    class HasPatchTest
    {
        Assertion has_patch_int = assertionThat(
            new HasPatch(5),
            allOf(
                passes(mock(Version.class, with(Version::patch, returning(5)))),
                fails(mock(Version.class, with(Version::patch, returning(4))), "had patch <4>"),
                hasDescription("has patch <5>")
            )
        );
    
        Assertion has_patch_quality = assertionThat(
            new HasPatch(greaterThan(4)),
            allOf(
                passes(mock(Version.class, with(Version::patch, returning(5)))),
                fails(mock(Version.class, with(Version::patch, returning(4))), "had patch <4>"),
                hasDescription("has patch greater than <4>")
            )
        );
    }

    The class is annotated with @Confidence to make it discoverable by the ConfidenceEngine.

    There are no statements in that test, not even test methods. The test only declares certain Assertions that are verified by the test engine.

    Also, there are no Before or After hooks. The idea is to make those part of the Assertion using composition. For instance, when a test requires certain resources you’d apply the withResources decorator, like in the following test, which requires a git repository in a temporary directory:

        Assertion default_strategy_on_clean_repo = withResources(
            new TempDir(),
            new Repository(
                getClass().getClassLoader().getResource("0.1.0-alpha.bundle"),
               "main"),
    
            (tempDir, repo) -> assertionThat(
                new GitVersion(TEST_STRATEGY, new Suffixes(), ignored -> "alpha"),
                maps(repo, to(preRelease(0, 1, 0, "alpha.20220116T191427Z-SNAPSHOT")))));

    The withResources decorator creates the required resources before the assertion is made and cleans up afterward.

    The Confidence Engine is still in an early ideation phase. You’re welcome to try it and make suggestions or contributions for improvements.

    Badge

    Show visitors of your repository that you use Confidence to test your projects by embedding this badge

    Confidence

    Put the following markdown snippet into your README.md file.

    [![Confidence](http://askcreate.top/wp-content/uploads/2025/08/Tested_with-Confidence-800000)](https://saynotobugs.org/confidence)

    Note that the link to https://saynotobugs.org/confidence currently just redirects to https://github.com/saynotobugsorg/confidence; this will change in the near future.

    Visit original content creator repository https://github.com/saynotobugsorg/confidence
  • ContextCollector

    COCO Context Collector – Multimodal Learning

    PyTorch OpenCV CMake nVIDIA

    It’s a Contextualizer, trained on COCO! See what I did there?

    This mixed vision-language model gets better by making mistakes

    p1

    Trained on COCO (50 GB, 2017 challenge)

    git clone https://github.com/AndreiMoraru123/ContextCollector.git
    cd ContextCollector
    chmod +x make
    ./make

    Via the Python API

    pip install pycocotools

    Click here to see some more examples

    p2

    Based on the original paper: Show, Attend and Tell

    Frame goes in, caption comes out.

    Note

    Make sure to check the original implementation first, because this is the model that I am using.

    p3

    Motivation

    The functional purpose of this project could be summed up as Instance Captioning, as in not trying to caption the whole frame, but only part of it. This approach is not only going to be faster (because the model is not attempting to encode the information of the whole image), but it can also prove more reliable for video inference, through a very simple mechanism I will call “expansion”.

    The deeper motivation for working on this is, however, more profound.

    For decades, language and vision were treated as completely different problems and naturally, the paths of engineering that have emerged to provide solutions for them were divergent to begin with.

    Neural networks, while perhaps a truce between the two, as their application in deep learning considerably improved both language and vision, still rely today mostly on different techniques for each task, as if language and vision were disconnected from one another.

    The latest show in town, the Transformer architecture, has provided a great advancement in the world of language models, following the original paper Attention is All You Need that paved the way for models like GPT-3, and while the success has not been completely transferred to vision, some breakthroughs have been made: An Image is Worth 16×16 Words, SegFormer, DINO.

    One of the very newest (time of writing: fall 2022) is Google’s LM-Nav, a Large Vision + Language model used for robotic navigation. What is thought-provoking about this project is the ability of a combined V+L model to "understand" the world better than a V or L model would on their own. Perhaps human intelligence itself is the sum of smaller combined intelligent models. The robot is presented with conflicting scenarios and is even able to "tell" if a prompt makes sense as a navigational instruction or is impossible to fulfil.

    p4

    Vocabulary and Data

    As the official dataset homepage states, “COCO is a large-scale object detection, segmentation, and captioning dataset”.

    For this particular model, I am concerned with detection and captioning.

    Before the CocoDataset can be created in the cocodata.py file, a vocabulary instance of the Vocabulary class has to be constructed using the vocabulary.py file. This can be conveniently done using the tokenize function of the nltk module.

    The Vocabulary is simply the collection of words that the model needs to learn. It also needs to convert said words into numbers, as the decoder can only process them as such. To be able to read the output of the model, they also need to be converted back. Both conversions are done using two hash maps (dicts), word2idx and idx2word.

    As with all sequence-to-sequence models, the vocab has to have a known <start> token, as well as an <end> one. An <unk> token stands in for unknown words (those not yet added to the vocabulary file) and acts as a selector for what gets in.

    The vocabulary is, of course, built on the COCO annotations available for the images.

    The important thing to know here is that each vocabulary generation can (and should) be customized. The instance will not simply add all the words that it can find in the annotations file, because a lot would be redundant.

    For this reason, two vocabulary hyper-parameters can be tuned:

    word_threshold = 6  # minimum word count threshold (if a word occurs less than 6 times, it is discarded)
    vocab_from_file = False  # if True, load existing vocab file. If False, create vocab file from scratch

    and, because the inference depends on the built vocabulary, the word_threshold can be set only while in training mode, and the vocab_from_file trigger can only be set to True while in testing mode.

    Building the vocabulary will generate the vocab.pkl pickle file, which can then be later loaded for inference.
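
    A stripped-down sketch of what the vocabulary build boils down to is shown below. The
    helper and field names are mine, not necessarily the exact ones used in vocabulary.py,
    and it assumes nltk's 'punkt' tokenizer data is available:

    import pickle
    from collections import Counter

    from nltk.tokenize import word_tokenize   # requires the 'punkt' data to be downloaded

    def build_vocab(captions, word_threshold=6, vocab_path="vocab.pkl"):
        """Count words across all captions and keep only the frequent ones."""
        counter = Counter()
        for caption in captions:
            counter.update(word_tokenize(caption.lower()))

        frequent = [word for word, count in counter.items() if count >= word_threshold]

        word2idx = {"<start>": 0, "<end>": 1, "<unk>": 2}
        for word in frequent:
            word2idx[word] = len(word2idx)
        idx2word = {idx: word for word, idx in word2idx.items()}

        with open(vocab_path, "wb") as f:
            pickle.dump({"word2idx": word2idx, "idx2word": idx2word}, f)
        return word2idx, idx2word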

    p5

    Model description

    $$I \to \text{Input ROI (region of interest)}$$

    $$S = \{ S_0, S_1, \dots, S_n \} \to \text{Target sequence of words}, \: S_i \in \mathbb{R}^{K}$$

    $$\text{where } K = \text{the size of the dictionary}$$

    $$p(S | I) \to \text{likelihood}$$

    The goal is to tweak the parameters in order to maximize the probability of a generated sequence being correct given a frame:

    $$\theta^{*} = \arg \max_{\theta} \log p(S|I; \theta)$$

    $$\log p(S|I) = \sum_{i=1}^{n} \underbrace{\log p(S_i|S_{1},\dots,S_{i-1},I)}_{\text{modeled with an RNN}}$$

    Then the forward feed is as follows:

    1. The image is first (and only once) encoded into the annotation vectors:
    $$x_{-1} = \text{CNN}(I)$$
    2. The context vectors are calculated from both the encoder output and the hidden state (initially a mean of the encoder output), using Bahdanau alignments:
    $$x_t = W_e S_t, \; t \in \{0, \dots, N-1\} \to \text{this is a joint embedding representation of the context vector}$$
    3. The model outputs the probability for the next word, given the current word (the first being the <start> token). It keeps going until it reaches the <end> token:
    $$p_{t+1} = \text{LSTM}(x_t), \; t \in \{0, \dots, N-1\}$$

    The attention itself is the alignment between the encoder’s output (vision) and the decoder hidden state (language):

    $$e_t = f_{\text{att}}(a, h_{t-1}) \quad\text{(a miniature neural network with a non-linear activation of two linear combinations)}$$

    $$h_{t-1} = \text{hidden state} \quad\text{and}\quad a = \text{annotation vectors}$$

    $$a = \{a_1, a_2, \dots, a_L\} \in \mathbb{R}^D \quad (D = 2048, \; L = 28 \times 28)$$

    In these equations, $a$ represents the output feature map of the encoder, which is a collection of $L$ activations. Each activation $a_i$ corresponds to a pixel in the input image and is a vector of dimension $D = 2048$, obtained by projecting the pixel features into a high-dimensional space. Collectively, the feature map $a$ captures information about the contents of the input image.

    $$\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_k \exp(e_{t,k})} \quad\text{(probability of each pixel being worth attending to; this results in the instance segmentation-like effect seen in the paper)}$$

    $$awe = f_i(\{a_i\}, \{\alpha_i\}) = \beta \sum_i a_i \, \alpha_i \quad\text{(attention weighted encoding: element-wise multiplication of each pixel and its probability, summed across the pixel dimension to give a weighted sum vector)}$$

    $$\beta = \sigma(f_b(h_{t-1})) \quad\text{(gating scalar used in the paper to achieve better results)}$$

    The expansion mechanism builts upon detection in the following way:

    $$\text{If } S_i \neq \text{label} \ \text{ for all } i \in \{1, \dots, n\}, \text{ then } I \leftarrow I + \phi \cdot I, \text{ where } 0 \leq \phi \leq 1 \text{ and } I \leq I + \phi \cdot I \leq I_{\max}$$

    This means that any time none of the output words matches the prediction of the detector, the ROI in which the model looks is enlarged, allowing the model to "collect more context". Here, label is the category prediction of YOLO.
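
    A toy sketch of this expansion rule (variable names are mine, not the repository's):

    def expand_roi(x, y, w, h, phi, frame_w, frame_h):
        """Grow the ROI by a factor phi around its centre, clamped to the frame."""
        new_w, new_h = w * (1 + phi), h * (1 + phi)
        cx, cy = x + w / 2, y + h / 2
        new_x = max(0.0, cx - new_w / 2)
        new_y = max(0.0, cy - new_h / 2)
        new_w = min(new_w, frame_w - new_x)
        new_h = min(new_h, frame_h - new_y)
        return new_x, new_y, new_w, new_h

    # if none of the generated caption words matches the detector label, widen the view:
    # if not any(word == label for word in caption):
    #     roi = expand_roi(*roi, phi=0.1, frame_w=width, frame_h=height)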

    As found in model.py

    Encoder

    The encoder is a beheaded pretrained ResNet-152 model that outputs a feature vector of size 2048 x W x H for each image, where W and H are both the encoded_image_size used in the last average pooling. The original paper proposed an encoded size of 14.

    Since ResNet was originally designed as a classifier, the last layer is going to be the activation function Softmax.

    However, since PyTorch deals with probabilities implicitly using CrossEntropyLoss, the classifier will not be present, and the only layers that need to be beheaded are the last linear fully connected layer and the average pooling layer, which will be replaced by the custom average pooling layer, for which you and I can choose the pooling size.

    The freeze_grad function is there if you need to tailor how many (if any) of the encoder layers you want to train (optional, since the net is pretrained).

    The purpose of the resulting feature map is to provide a latent space representation of each frame, from which the decoder can draw multiple conclusions.

    Any ResNet architecture (any depth) will work here, as well as some of the earlier CNNs (the paper used VGG), but keep in mind memory constraints for inference.
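
    A minimal sketch of such a beheaded encoder is shown below (it assumes torchvision >= 0.13
    for the weights argument; the real model.py may differ in details such as fine-tuning flags):

    import torch.nn as nn
    import torchvision

    class Encoder(nn.Module):
        def __init__(self, encoded_image_size=14):
            super().__init__()
            resnet = torchvision.models.resnet152(
                weights=torchvision.models.ResNet152_Weights.DEFAULT)
            # drop the final average pooling and fully connected (classifier) layers
            self.resnet = nn.Sequential(*list(resnet.children())[:-2])
            # pool to a fixed spatial size, regardless of the input resolution
            self.pool = nn.AdaptiveAvgPool2d((encoded_image_size, encoded_image_size))

        def forward(self, images):                       # (batch, 3, H, W)
            features = self.pool(self.resnet(images))    # (batch, 2048, 14, 14)
            return features.permute(0, 2, 3, 1)          # (batch, 14, 14, 2048)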

    You can check how torchvision implements this below:

    image

    p6

    Attention

    Here is an interesting experiment on human perception conducted by Corbetta & Shulman to go along with this:

    Why?

    “One important property of human perception is that one does not tend to process a whole scene in its entirety at once. Instead humans focus attention selectively on parts of the visual space to acquire information when and where it is needed” — Recurrent Models of Visual Attention

    The great gain of using attention as a mechanism in the decoder is that the importance of the information contained in the encoded latent space is taken into account and weighted (across all pixels of the latent space). Namely, attention lifts the burden of having a single dominant state guessing at the context of the information the model takes from the features. The results are actually quite astounding when compared to an attention-less network (see previous project).

    Where?

    Since the encoder is already trained and can output a competent feature map (we know that ResNet can classify images), the mechanism of attention is used to augment the behaviour of the RNN decoder. During the training phase, the decoder learns which parts of the latent space make up the "context" of an image. The selling point of this approach is that the learning is not done in a simple, sequential manner, but some non-linear interpolations can occur in such a way that you could make a strong point for convincing someone that the model has actually "understood" the task.

    What kind?

    The original paper, as well as this implementation, use Additive / Bahdanau Attention

    The formula for the Bahdanau Attention is essentially the following:

    alpha = tanh((W1 * e) + (W2 * h))

    where e is the output of the encoder, h is the previous hidden state of the decoder, and W1 and W2 are trainable weight matrices, producing a single number. (Note that the original paper also used tanh as a pre-activation before softmax; this implementation instead uses ReLU.)

    Additive attention is a model in and of itself, because it is in essence just a feed forward neural network. This is why it is built as an nn.Module class and inherits a forward call.

    But how does Attention actually work here?

    The paper itself cites Bahdanau, but does not go in depth on the reasoning behind this architecture. Here is how to make sense of it:

    The matrices W1 and W2 project the encoder features and the hidden state of the decoder into the same dimensionality, so that they can be added.

    Adding them element-wise means the model is forced to minimize the loss for the features of the image as well as its captions, so it "must find" some connection between them.

    Since attention is going to be non-linear, we activate the sum using ReLU or tanh. The result is then squeezed into a single neuron, which, once softmax-ed, will hold the probability of each pixel being worth "attending to". Notice that the features of the encoder are expressed as a number of pixels, not W x H, as they were passed through a view before the attention call. This means that the single-neuron computation is done for all the pixels in the annotation vector.

    Below is a gif from TensorFlow playground that serves as a simplified example:

    tfplay

    For the two features of the data, the X and Y coordinates, we can use 4 neurons to learn 4 lines, one line per neuron. This is what the projection of the attention_dim is doing. The final neuron can just learn a linear combination of the previous 4 in the hidden layer. This is what the full_att layer is essentially doing by mapping the attention_dim neurons to a single one.

    Therefore, after getting the probability of each pixel being attended to, we can multiply these probabilities with the pixel values themselves and sum across that dimension. This results in a weighted sum, and this is exactly the context vector the paper is talking about. (When you sum across a dimension, say 196 for the number of pixels, you lose that dimension as it becomes 1; this is how the vectors are turned into a single vector, which can then be passed to the LSTM for computation.)
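
    Putting these pieces together, a Bahdanau-style attention module consistent with the
    description above looks roughly like this (a sketch, not a verbatim copy of model.py):

    import torch.nn as nn

    class BahdanauAttention(nn.Module):
        def __init__(self, encoder_dim=2048, decoder_dim=512, attention_dim=300):
            super().__init__()
            self.encoder_att = nn.Linear(encoder_dim, attention_dim)   # W1
            self.decoder_att = nn.Linear(decoder_dim, attention_dim)   # W2
            self.full_att = nn.Linear(attention_dim, 1)                # one score per pixel
            self.relu = nn.ReLU()
            self.softmax = nn.Softmax(dim=1)

        def forward(self, encoder_out, hidden):
            # encoder_out: (batch, num_pixels, encoder_dim), hidden: (batch, decoder_dim)
            att1 = self.encoder_att(encoder_out)             # (batch, num_pixels, attention_dim)
            att2 = self.decoder_att(hidden).unsqueeze(1)     # (batch, 1, attention_dim)
            scores = self.full_att(self.relu(att1 + att2)).squeeze(2)   # (batch, num_pixels)
            alpha = self.softmax(scores)                     # probability per pixel
            context = (encoder_out * alpha.unsqueeze(2)).sum(dim=1)     # weighted sum
            return context, alpha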

    Here is a gif so you can find the concepts of the paper in code easier:

    attention

    p7

    Decoder

    I am using pretty much the same implementation proposed in the greatly elaborated Image Captioning repo with some caveats. Precisely:

    1. I do not use padded sequences for the captions
    2. I tailored tensor dimensions and types for a different pipeline (and dataset as well, the repo uses COCO 2014), so you may see differences
    3. I am more lax with using incomplete captions in the beam search and I am also not concerned with visualizing the attention weights

    The aforementioned implementation is self-sufficient, but I will further explain how the decoder works for the purposes of this particular project, as well as the statements above.

    The main idea of the model workflow is that the Encoder is passing a "context" feature to the decoder, which in turn produces an output. Since the decoder is an RNN, the outputs will be given in sequences. The recurrent network can take into account the input features as well as its own hidden state.

    The attention weighted encoding is gated through a sigmoid activation and the resulting values are added to the embedding of the previous word. This concatenation is then passed as the input to an LSTMCell, along with the previous hidden state.

    p8

    The LSTM Cell

    The embedded image captions are concatenated with gated attention encodings and passed as the input of the LSTMCell. If this were an attentionless mechanism, you would just pass the encoded features added to the embeddings.

    Concatenation in code will look like this:

    self.lstm = nn.LSTMCell(embeddings_size + encoded_features_size, decoded_hidden_size)  

    The decoded dimension, i.e. the hidden size of the LSTMCell, is obtained by concatenating the hidden and cell states. This is called a joint embedding architecture, because, well, you are smashing them both into the same vectorized world representation.

    hidden_state, cell_state = self.lstm( torch.cat([embeddings[:batch_size_t, t, :], attention_weighted_encoding], dim=1),  # input
                                          (hidden_state[:batch_size_t], cell_state[:batch_size_t]) )  # hidden

    The cell outputs a tuple made out of the next hidden and cell states like in the picture down below.

    The intuition and computation behind the mechanism of the long short term memory unit are as follows:

    The cell operates with a long term memory and a short term one. As their names intuitively convey, the former is concerned with a more general sense of state, while the latter is concentrated around what it has just seen.

    In the picture up above as well as in this model, h represents the short term memory, or the hidden state, while c represents the long term memory, or the cell state.

    1. The long term memory is initially passed through a forget gate. The forget factor of this gate is computed using a sigmoid, which ideally behaves like a binary selector (something either gets forgotten [0] or not [1]). In practice, most values will not be saturated, so the information is only somewhat forgotten (between 0 and 1). The current hidden state, or short term memory, is passed through the sigmoid to obtain this forget factor, which is then multiplied point-by-point with the long term memory, or cell state.
    2. The short term memory is joined by the input event, x (which represents what the cell has just seen/experienced), in the input gate, also called the learn gate. This computation is done by gating both the input and the hidden state through an ignore gate. The ignore factor is again a sigmoid, ideally classifying what has to be ignored [0] and what not [1], while a tanh activation produces the candidate information to keep.
    3. The long term memory, joined by the newly acquired information from the input gate, passes through the remember gate and becomes the new cell state and the new long term memory of the LSTM. The operation is a point-by-point addition of the two.
    4. The output gate takes in all of the information from the input, hidden and cell states and becomes the new hidden state and short term memory of the network. The long term memory is passed through a tanh while the short term memory is passed through a sigmoid, before the two are multiplied point-by-point in the final computation (a minimal code sketch of these gates follows the list).
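    Here is that computation written out by hand, just to make the gates concrete. In the model itself nn.LSTMCell does all of this internally; the weight matrices below are placeholders and biases are omitted:

    import torch

    def lstm_step(x, h, c, W_f, W_i, W_g, W_o):
        z = torch.cat([x, h], dim=1)        # input event joined with the short term memory
        f = torch.sigmoid(z @ W_f)          # forget factor applied to the long term memory
        i = torch.sigmoid(z @ W_i)          # ignore factor (input / learn gate)
        g = torch.tanh(z @ W_g)             # candidate information to add
        o = torch.sigmoid(z @ W_o)          # output factor
        c_next = f * c + i * g              # remember gate: new cell state / long term memory
        h_next = o * torch.tanh(c_next)     # output gate: new hidden state / short term memory
        return h_next, c_next

    x, h, c = torch.randn(2, 8), torch.zeros(2, 16), torch.zeros(2, 16)
    W_f, W_i, W_g, W_o = (torch.randn(24, 16) for _ in range(4))   # 24 = input size + hidden size
    h, c = lstm_step(x, h, c, W_f, W_i, W_g, W_o)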

    Teacher Forcing

    You may notice in the gif below that, during training, we decode at every step from the embeddings of the training labels themselves, instead of using the embeddings only for the first computation and then feeding in the output predictions, like they did in Show and Tell. This is called Teacher Forcing, and you can imagine that it definitely speeds up the learning process:

    teacherforcing

    Now we have a new problem: the model is going to memorize the captions by heart for each image, because the only prediction that minimizes the loss word for word for a given caption is the exact same sentence.

    Then why are we doing this? Here is the fascinating part: the model is not learning semantics and compositionality during training, but you can notice it is learning the alphas, which means it will remember what each word is supposed to look like in an image representation. This is why we are not calling the forward function during inference; that would be useless. What the authors do instead is use a beam search algorithm to form sentences different from the training labels, and you can find that in the sample function. This is the function you would call during inference.
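    A toy version of the teacher forced loop (no attention, illustrative sizes only, not this repo's exact code) looks like this; note that the ground truth token at step t is always what gets embedded and fed in:

    import torch
    import torch.nn as nn

    vocab_size, embed_size, hidden_size = 100, 16, 32
    embedding = nn.Embedding(vocab_size, embed_size)
    cell = nn.LSTMCell(embed_size, hidden_size)
    fc = nn.Linear(hidden_size, vocab_size)

    captions = torch.randint(0, vocab_size, (4, 10))    # (batch, caption_length) ground truth
    h = torch.zeros(4, hidden_size)
    c = torch.zeros(4, hidden_size)

    scores = []
    for t in range(captions.size(1) - 1):
        h, c = cell(embedding(captions[:, t]), (h, c))  # teacher forced input, never the model's own prediction
        scores.append(fc(h))                            # predict the word at position t + 1
    scores = torch.stack(scores, dim=1)                 # (batch, caption_length - 1, vocab_size)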

    p9

    Training the model

    To train this model, run the train.py file with the parsed arguments tailored to your choice. My configuration so far has been something like this:

    embed_size = 300  # this is the size of the embedding of a word, 
                      # i.e. exactly how many numbers will represent each word in the vocabulary.
                      # This is done using a look-up table through nn.Embedding 
    
    attention_dim = 300  # this is the size of the attention network's hidden dimension,
                         # i.e. how many intermediate features are used to score each pixel.
                         # The pixel scores themselves are learned through training
                         # and the final linear layer's output is softmax-ed
                         # in the forward pass so as to output probabilities.
    
    decoder_dim = 300  # this is the dimension of the hidden size of the LSTM cell
                       # and it will be the last input of the last fully connected layer
                       # that maps the vectorized words to their scores 

    Now, there is no reason to keep all three at the same size, but you can intuitively see that it makes sense to keep them around the same range. You can try larger dimensions, but keep in mind again hardware limitations, as these are held in memory.

    The rest of the parsed arguments are:

    dropout = 0.5  # the only drop out is at the last fully connected layer in the decoder,
                   # the one that outputs the predictions based on the resulted hidden state of the LSTM cell
                   
    num_epochs = 5  # keep in mind that training an epoch may take several hours on most machines
    
    batch_size = 22  # this one also depends on how many images your GPU can hold at once
                     # I cannot go much higher, so the training will take longer
    
    word_threshold = 6  # the minimum number of occurrences for a word to be included in the vocabulary
    
    vocab_from_file = False  # if this is your first time training / you do not have the pickle file,
                             # you will have to generate the vocabulary first
                           
    save_every = 1  # save a checkpoint every chosen number of epochs
    
    print_every = 100  # log stats every chosen number of batches

    The loss function is CrossEntropyLoss and there is little reason to change it: captioning here reduces to per-word multi-class classification.
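    With the word scores and targets flattened, the loss computation is essentially the following (shapes are illustrative, not the repo's exact tensors):

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()

    scores = torch.randn(4, 20, 9000)           # (batch, caption_length, vocab_size) logits from the decoder
    targets = torch.randint(0, 9000, (4, 20))   # (batch, caption_length) ground truth word indices

    loss = criterion(scores.view(-1, scores.size(-1)), targets.view(-1))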

    The train_transform the images go through before being passed to the encoder is pretty standard, using the ImageNet mean and std values.

    Since the input sizes here do not vary it may make sense to set:

    torch.backends.cudnn.benchmark = True  # optimize hardware algorithm

    p10

    Beam Search

    In the sample function of the decoder, there is an input parameter called k. It represents the number of candidate captions held in consideration for future exploration.

    Beam search is a staple of machine translation, because you do not always want the locally best next word; the word that comes after it may not combine into the overall best sentence.

    Always looking for the next best word is called a greedy search, and you can achieve that by setting k = 1, so that only one hypothesis is held at every step.

    Again, keep in mind that, provided you have one, this search will also be transferred to your graphics card, so you may run out of memory if you try to keep track of too many possibilities.

    That means you may sometimes be forced to either use a greedy search, or break the sentences before they finish.
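    A generic sketch of the idea (not the repo's exact sample function): step_fn is assumed to return the top (log probability, token) candidates for the next position, and k is the beam width, with k = 1 degenerating into greedy search.

    import heapq

    def beam_search(step_fn, start_token, end_token, k=10, max_len=20):
        beams = [(0.0, [start_token])]                  # (cumulative log prob, sequence)
        completed = []
        for _ in range(max_len):
            candidates = []
            for score, seq in beams:
                if seq[-1] == end_token:                # finished hypothesis, keep it aside
                    completed.append((score, seq))
                    continue
                for log_p, token in step_fn(seq):       # expand every live hypothesis
                    candidates.append((score + log_p, seq + [token]))
            if not candidates:
                break
            beams = heapq.nlargest(k, candidates, key=lambda x: x[0])   # keep only the k best
        completed.extend(beams)
        return max(completed, key=lambda x: x[0])[1]    # best scoring sequence overall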

    I’ll leave you with this visual example of how beam search can select two nodes in a graph instead of only one.

    Here is a comparison of how the model behaves using a beam width of 1 (i.e. greedy search) vs one of 10:

    k1

    k10

    You can definitely see that k=1 achieves a higher FPS rate, but at the cost of accuracy, while the k=10 beam is more accurate, but at a performance cost, as the k possibilities are held on the GPU.

    p11

    YOLO and the Perspective Expansion

    Trying to output a caption for each frame of a video can be painful, even with attention. The model was trained on images from the COCO dataset, which are context-rich scenarios focused mainly on a single event, and it will perform accordingly on the testing set.

    But “real life” videos are different: each frame is related to the previous one, and not all of them have much going on in one place; rather, many things happen at once.

    • For this reason, I use a tiny YOLOv4 model to get an initial object of interest in the frame.
    • A caption is then generated for the region of interest (ROI) bounded by the YOLO generated box.
    • If the prediction is far off the truth (no word in the sentence matches the label output by the detector), the algorithm expands the ROI by a given factor until it does, or until a certain number of tries have been made, to avoid infinite loops (see the sketch after this list).
    • Using the newly expanded ROI, the model is able to get more context out of the frame.
    • As you can see in the examples, the expansion factor usually finds its comfortable space before reaching a full sized image.
    • That means there are significant gains in inference speed as well as better predictions.
    • Much like in Viola-Jones, this model expands its window, but not when it is correct.
    • Instead, it grows by making obvious mistakes, and in fact relies on them to give its best performance in terms of context understanding.
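    A simplified sketch of that expansion loop (expand_roi, captioner and the box format are hypothetical helpers written for illustration, not the repo's exact code):

    def expand_roi(box, factor, frame_shape):
        """Grow an (x, y, w, h) box around its center by `factor`, clipped to the frame."""
        x, y, w, h = box
        cx, cy = x + w / 2, y + h / 2
        w, h = w * factor, h * factor
        H, W = frame_shape[:2]
        x1, y1 = max(0, int(cx - w / 2)), max(0, int(cy - h / 2))
        x2, y2 = min(W, int(cx + w / 2)), min(H, int(cy + h / 2))
        return x1, y1, x2 - x1, y2 - y1

    def caption_with_context(frame, box, label, captioner, expand=1.3, max_tries=5):
        """Re-caption an expanding ROI until the caption mentions the detector label."""
        for _ in range(max_tries):                       # bounded, to avoid infinite loops
            x, y, w, h = box
            caption = captioner(frame[y:y + h, x:x + w])
            if label in caption.split():                 # the caption agrees with the detector
                return caption, box
            box = expand_roi(box, expand, frame.shape)   # otherwise grow the ROI and retry
        return caption, box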

    p12

    Inference Pipeline

    I provided some model pruning functions in the pipeline.py file, both structured and unstructured (global and local), but I use neither and do not recommend them as they are now. You could achieve faster inference by cutting out neurons or connections, but you will also hinder the performance.

    I strongly advise against structured pruning (both L1 and L2 based), as it just wipes out most of the learned vocabulary, with no speed gains.

    Example:

    a man <unk> <unk> <unk> a <unk> <unk> <unk> <unk> .
    a man <unk> <unk> <unk> a <unk> <unk> <unk> .
    a <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> .
    a <unk> <unk> <unk> <unk> <unk> <unk> <unk> .
    

    While unstructured (both local and global) pruning is safer:

    a man on a motorcycle in the grass .
    a motorcycle parked on the side of the road .
    a man on a skateboard in a park .
    a person on a motorcycle in the woods .
    

    But it is no more performant in terms of speed.

    Local pruning works layer by layer, while global pruning wipes across all layers indiscriminately. For the purpose of this model, neither produces a gain.

    Unstructured pruning is always L1 here, because individual weights are ranked by magnitude one after the other.
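    For reference, this is roughly what the two unstructured variants look like with torch.nn.utils.prune, applied to a toy stand-in for the decoder (layer choices and amounts are illustrative only):

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    decoder = nn.Sequential(nn.Embedding(100, 16), nn.Linear(16, 100))
    embedding, fc = decoder[0], decoder[1]

    # local: prune 20% of one layer's weights, by L1 magnitude
    prune.l1_unstructured(fc, name="weight", amount=0.2)

    # global: prune 20% of all listed weights at once, again by L1 magnitude
    prune.global_unstructured(
        [(embedding, "weight"), (fc, "weight")],
        pruning_method=prune.L1Unstructured,
        amount=0.2,
    )

    # make the pruning permanent (drops the masks and the re-parametrization)
    prune.remove(fc, "weight")
    prune.remove(embedding, "weight")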

    The JIT compiler can be used to increase performance via optimized_execution. However, this does not always result in a smaller model, and it can in fact make the network grow in size.

    Neither torch.jit nor the ONNX converters can be used on the decoder: it is heavily customized, and these tools currently require strong tensor typing and are not very permissive towards custom architectures. I therefore resorted to tracing only the ResNet encoder (which also cannot be run with onnxruntime, because of the custom average pooling layer).
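    Tracing just the encoder looks roughly like this (a stock torchvision ResNet, beheaded of its pooling and classification head, stands in for the actual encoder; shapes and file names are illustrative):

    import torch
    import torchvision

    resnet = torchvision.models.resnet50()
    encoder = torch.nn.Sequential(*list(resnet.children())[:-2])   # drop avgpool + fc ("behead" it)

    example = torch.randn(1, 3, 224, 224)
    with torch.jit.optimized_execution(True):
        traced_encoder = torch.jit.trace(encoder, example)

    traced_encoder.save("encoder_traced.pt")   # reload later with torch.jit.load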

    As you can start to see, there are not really any out of the box solutions for these types of things yet.

    The rest of the inference pipeline just loads the state_dicts of each model and runs the data stream through them using a pretty standard test_transform and dealing with the expansion of the ROI.

    p13

    Running the model

    To test the model you can run the run.py file by parsing the needed arguments.

    Since the prediction of the net relies on teacher forcing, i.e. using the whole caption for inference regardless of the last generated word, the whole vocabulary is needed to test the model, meaning that the vocab.pkl file has to be used, as well as the dataset.

    I also cannot provide the encoder here as there are size constraints, but any pretrained resnet will work (do make sure to behead it first if you choose to try this out).

    The options for running the model are as follows:

    --video  # this is an mp4 video that will be used for inference, I provide one in the video folder
    --expand  # this is the expanding ratio of the bounding box ROI after each mistake
    --backend  # this is best set to 'cuda', but be wary of memory limitations
    --k  # this is the number of nodes (captions) held for future consideration in the beam search
    --conf  # this is the confidence threshold for YOLO
    --nms  # this is the non-maximum suppression for the YOLO rendered bounding boxes

    YOLO inference is done using the dnn module from OpenCV.
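    Loading the detector through the dnn module looks roughly like this (file names and thresholds are placeholders; the CUDA backend requires an OpenCV build with CUDA support, matching the --backend option above):

    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    frame = cv2.imread("frame.jpg")
    class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)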

    p14

    Hardware and Limitations

    I am using:

    • a Turing GeForce GTX 1660 Ti with 6 GB of memory (CUDA arch bin of 7.5)
    • CUDA 11.7
    • cuDNN 8.5 (so that it works with OpenCV 4.5.2)

    Be aware that when building OpenCV there will be no errors if you pick incompatible versions. However, unless everything clicks, the net will refuse to run on the GPU.

    Using the computation FPS = 1 / inference_time, the model is able to average 5 frames per second.

    p15

    Future outlook and goals

    What I am currently looking into is optimization.

    The current model is working, but in a hindered state. With greater embeddings and a richer vocabulary the outputs can potentially be better. Training in larger batches will also finish faster.

    For this reason, I am currently working on Weight Quantization and Knowledge Distillation.

    I am also currently looking into deployment tools using ONNX.

    Neither is provided out of the box for these kinds of models, so there is really no go-to solution. I will keep updating the repository as I make progress.

    I am also playing around with the Intel Neural Compute Stick and the OpenVINO API to offload the inference of the different networks and avoid running out of CUDA memory.

    p16

    Some more examples

    Notice how in the motorcycle example the ROI expands until it notices there is not just one person, but a group of people riding motorcycles, something object detection by itself is incapable of accomplishing.

    Shift In Perspective
    p1m p2m p3m

    p1

    The Big Picture
    p1 p2 p3

    lambo

    Multi Purpose
    p1 p2

    Context Collector

    Based on the original work:

    @misc{https://doi.org/10.48550/arxiv.1502.03044,
      doi = {10.48550/ARXIV.1502.03044},
      url = {https://arxiv.org/abs/1502.03044},
      author = {Xu, Kelvin and Ba, Jimmy and Kiros, Ryan and Cho, Kyunghyun and Courville, Aaron and Salakhutdinov, Ruslan and Zemel, Richard and Bengio, Yoshua},
      keywords = {Machine Learning (cs.LG), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
      title = {Show, Attend and Tell: Neural Image Caption Generation with Visual Attention},
      publisher = {arXiv},
      year = {2015},
      copyright = {arXiv.org perpetual, non-exclusive license}
    }

    and Repo

    Bloopers

    I think there is a big Ferrari in the middle of this scene, and it should be the center of attention. Not sure though.

    blooper

    Visit original content creator repository https://github.com/AndreiMoraru123/ContextCollector
  • dog-project

    Project Overview

    Welcome to the Convolutional Neural Networks (CNN) project in the AI Nanodegree! In this project, you will learn how to build a pipeline that can be used within a web or mobile app to process real-world, user-supplied images. Given an image of a dog, your algorithm will identify an estimate of the canine’s breed. If supplied an image of a human, the code will identify the resembling dog breed.

    Sample Output

    Along with exploring state-of-the-art CNN models for classification, you will make important design decisions about the user experience for your app. Our goal is that by completing this lab, you understand the challenges involved in piecing together a series of models designed to perform various tasks in a data processing pipeline. Each model has its strengths and weaknesses, and engineering a real-world application often involves solving many problems without a perfect answer. Your imperfect solution will nonetheless create a fun user experience!

    Screenshots from Submission

    alt tag

    alt tag

    alt tag

    alt tag

    alt tag

    Project Instructions

    Instructions

    1. Clone the repository and navigate to the downloaded folder.

      	git clone https://github.com/udacity/dog-project.git
      	cd dog-project
      
    2. Download the dog dataset. Unzip the folder and place it in the repo, at location path/to/dog-project/dogImages.

    3. Download the human dataset. Unzip the folder and place it in the repo, at location path/to/dog-project/lfw. If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.

    4. Download the VGG-16 bottleneck features for the dog dataset. Place it in the repo, at location path/to/dog-project/bottleneck_features.

    5. Obtain the necessary Python packages, and switch Keras backend to Tensorflow.

      For Mac/OSX:

      	conda env create -f requirements/aind-dog-mac.yml
      	source activate aind-dog
      	KERAS_BACKEND=tensorflow python -c "from keras import backend"
      

      For Linux:

      	conda env create -f requirements/aind-dog-linux.yml
      	source activate aind-dog
      	KERAS_BACKEND=tensorflow python -c "from keras import backend"
      

      For Windows:

      	conda env create -f requirements/aind-dog-windows.yml
      	activate aind-dog
      	set KERAS_BACKEND=tensorflow
      	python -c "from keras import backend"
      
    6. Open the notebook and follow the instructions.

      	jupyter notebook dog_app.ipynb
      

    NOTE: While some code has already been implemented to get you started, you will need to implement additional functionality to successfully answer all of the questions included in the notebook. Unless requested, do not modify code that has already been included.

    Amazon Web Services

    Instead of training your model on a local CPU (or GPU), you could use Amazon Web Services to launch an EC2 GPU instance. Please refer to the Udacity instructions for setting up a GPU instance for this project.

    • SSH into the EC2 GPU Instance

       ssh aind2@<EC2_IPv4_Public_IP>
      
    • Clone and Activate environment

       git clone https://github.com/ltfschoen/dog-project; cd dog-project
       source activate aind2;
      
    • Fetch and unzip Dog and Human datasets

       wget https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip; unzip dogImages.zip; rm dogImages.zip;
      
       wget http://vis-www.cs.umass.edu/lfw/lfw.tgz; tar -xvzf lfw.tgz; rm lfw.tgz
      
    • Fetch and unzip VGG-16 Bottleneck and ResNet50 Bottleneck

       cd bottleneck_features; wget https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogVGG16Data.npz; cd ..
      
       cd bottleneck_features; wget https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogResnet50Data.npz; cd ..
      
    • Start the Jupyter Notebook

       jupyter notebook --ip=0.0.0.0 --no-browser
      
    • Copy/Paste the URL into your browser (i.e. http://0.0.0.0:8888/?token=3156e...) but REPLACE the 0.0.0.0 with the “IPv4 Public IP” from AWS EC2 GPU instance shown in the EC2 Dashboard

    • Click dog_app.ipynb in the browser to edit the notebook

    • IMPORTANT NOTE:

      • STOP the EC2 GPU Instance when not in use since “p2.xlarge” GPU Instance costs ~AU$1.5 per hour to run according to EC2 On-Demand pricing
      • TERMINATE when no longer using since otherwise may be subjected to EBS Storage costs

    Copying data from AWS GPU Instance to Local Computer

    • Run the following on local machine:
    scp -r aind2@<IPv4 Public IP>:/home/aind2/dog-project /Users/Ls/Desktop
    

    Evaluation

    Your project will be reviewed by a Udacity reviewer against the CNN project rubric found here. Review this rubric thoroughly, and self-evaluate your project before submission. All criteria found in the rubric must meet specifications for you to pass.

    Project Submission

    When you are ready to submit your project, collect the following files and compress them into a single archive for upload:

    • The dog_app.ipynb file with fully functional code, all code cells executed and displaying output, and all questions answered.
    • An HTML or PDF export of the project notebook with the name report.html or report.pdf.
    • Any additional images used for the project that were not supplied to you for the project. Please do not include the project data sets in the dogImages/ or lfw/ folders. Likewise, please do not include the bottleneck_features/ folder.

    Alternatively, your submission could consist of the GitHub link to your repository.

    Project Rubric

    Files Submitted

    • Submission Files: The submission includes all required files.

    Documentation

    • Comments: The submission includes comments that describe the functionality of the code.

    Step 1: Detect Humans

    • Question 1 (Assess the Human Face Detector): The submission returns the percentage of the first 100 images in the dog and human face datasets with a detected human face.
    • Question 2 (Assess the Human Face Detector): The submission opines whether Haar cascades for face detection are an appropriate technique for human detection.

    Step 2: Detect Dogs

    • Question 3 (Assess the Dog Detector): The submission returns the percentage of the first 100 images in the dog and human face datasets with a detected dog.

    Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

    • Model Architecture: The submission specifies a CNN architecture.
    • Train the Model: The submission specifies the number of epochs used to train the algorithm.
    • Test the Model: The trained model attains at least 1% accuracy on the test set.

    Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)

    • Obtain Bottleneck Features: The submission downloads the bottleneck features corresponding to one of the Keras pre-trained models (VGG-19, ResNet-50, Inception, or Xception).
    • Model Architecture: The submission specifies a model architecture.
    • Question 5 (Model Architecture): The submission details why the chosen architecture succeeded in the classification task and why earlier attempts were not as successful.
    • Compile the Model: The submission compiles the architecture by specifying the loss function and optimizer.
    • Train the Model: The submission uses model checkpointing to train the model and saves the model with the best validation loss.
    • Load the Model with the Best Validation Loss: The submission loads the model weights that attained the least validation loss.
    • Test the Model: Accuracy on the test set is 60% or greater.
    • Predict Dog Breed with the Model: The submission includes a function that takes a file path to an image as input and returns the dog breed that is predicted by the CNN.

    Step 6: Write your Algorithm

    • Write your Algorithm: The submission uses the CNN from Step 5 to detect dog breed. The submission has different output for each detected image type (dog, human, other) and provides either the predicted actual (or resembling) dog breed.

    Step 7: Test your Algorithm

    • Test Your Algorithm on Sample Images!: The submission tests at least 6 images, including at least two human and two dog images.
    • Question 6 (Test Your Algorithm on Sample Images!): The submission discusses performance of the algorithm and discusses at least three possible points of improvement.

    Suggestions to Make your Project Stand Out!

    (Presented in no particular order …)

    (1) Augment the Training Data

    Augmenting the training and/or validation set might help improve model performance.
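    One possible augmentation setup with Keras (parameter values are illustrative, and train_tensors / train_targets are assumed names for the notebook's preprocessed image tensors and one-hot labels):

    from keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(
        rotation_range=20,          # random rotations up to 20 degrees
        width_shift_range=0.1,      # random horizontal shifts
        height_shift_range=0.1,     # random vertical shifts
        horizontal_flip=True)       # random mirroring

    datagen.fit(train_tensors)
    # model.fit_generator(datagen.flow(train_tensors, train_targets, batch_size=20), ...)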

    (2) Turn your Algorithm into a Web App

    Turn your code into a web app using Flask or web.py!

    (3) Overlay Dog Ears on Detected Human Heads

    Overlay a Snapchat-like filter with dog ears on detected human heads. You can determine where to place the ears through the use of the OpenCV face detector, which returns a bounding box for the face. If you would also like to overlay a dog nose filter, some nice tutorials for facial keypoints detection exist here.

    (4) Add Functionality for Dog Mutts

    Currently, if a dog appears 51% German Shepherd and 49% poodle, only the German Shepherd breed is returned. The algorithm is currently guaranteed to fail for every mixed breed dog. Of course, if a dog is predicted as 99.5% Labrador, it is still worthwhile to round this to 100% and return a single breed; so, you will have to find a nice balance.
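    One possible way to handle this, assuming probabilities is the softmax output of the breed classifier and dog_names the list of breed names (the 0.8 ratio is just an illustrative threshold):

    import numpy as np

    def predict_breeds(probabilities, dog_names, second_breed_ratio=0.8):
        top = np.argsort(probabilities)[::-1][:2]        # the two most probable breeds
        best, runner_up = top[0], top[1]
        # report a mix only when the runner-up is nearly as probable as the winner
        if probabilities[runner_up] >= second_breed_ratio * probabilities[best]:
            return "a mix of {} and {}".format(dog_names[best], dog_names[runner_up])
        return dog_names[best]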

    (5) Experiment with Multiple Dog/Human Detectors

    Perform a systematic evaluation of various methods for detecting humans and dogs in images. Provide improved methodology for the face_detector and dog_detector functions.

    Visit original content creator repository https://github.com/ltfschoen/dog-project