Angular Deep Dive

In Angular you define components and their templates in Angular's syntax, but the browser only understands JavaScript. How does Angular translate all your components, bindings and templates to TypeScript, and from TypeScript to JavaScript? This article contains the information I could find.

Links

Ahead-of-time (AOT) compilation
Explanation Video of the Angular Compiler
Angular Code on GitHub

The need for a Compiler

The answer to the question of how Angular converts the Angular syntax to JavaScript is that Angular contains its own compiler. This compiler converts templates to TypeScript and feeds that TypeScript to the TypeScript compiler to find type errors. It then outputs messages for mistakes you made when writing your templates. This is necessary because Angular templates can contain logic such as referencing variables defined elsewhere, using pipes or using directives (ngIf, ngFor, ngSwitch, ngModel, ngStyle, …). The code generated for type checking templates is never going to be executed in the browser; it exists purely to report errors to the user!

The compiler also generates TypeScript code for the components you write. This code will actually run inside the browser.

The need for a Runtime

The compiler takes a component definition including its template and, after type checking (see above), turns it into a ComponentDefinition. The runtime can execute the ComponentDefinition inside the browser.

The runtime can understand and execute these ComponentDefinitions. The question is: why is a ComponentDefinition not capable of running by itself? After all, it is converted from TypeScript to JavaScript, and JavaScript is runnable in a browser.

The answer is that a ComponentDefinition is not self-contained: it consists of calls into Angular's runtime rendering functions, which perform the actual DOM creation and change detection. Without the runtime library loaded alongside it, those calls would have nothing to execute.
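As an illustration, here is a rough, TypeScript-flavoured pseudocode sketch of an Ivy-compiled component. The instruction names (ɵɵdefineComponent, ɵɵelementStart, …) exist in @angular/core, but the exact shape varies by Angular version and is heavily simplified here:

```typescript
// Pseudocode sketch of compiler output for a component whose template is
// <h1>{{title}}</h1> — not the exact output of any particular Angular version.
class HelloComponent {
  title = 'Hello';

  static ɵcmp = ɵɵdefineComponent({
    type: HelloComponent,
    selectors: [['app-hello']],
    template: function (renderFlags, ctx) {
      if (renderFlags & 1 /* create */) {
        ɵɵelementStart(0, 'h1');
        ɵɵtext(1);
        ɵɵelementEnd();
      }
      if (renderFlags & 2 /* update */) {
        ɵɵadvance(1);
        ɵɵtextInterpolate(ctx.title);
      }
    },
  });
}
```

Functions like ɵɵelementStart and ɵɵtextInterpolate are defined by the runtime; without it, the generated code has nothing to call.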

CSS Animations

This post is supposed to be a beginner's introduction to CSS 3 animations. There are two ways to animate in CSS 3: animations and transitions.

Differences between Animations and Transitions

This article sums it up nicely: https://www.peachpit.com/articles/article.aspx?p=2300569&seqNum=2

In general, transitions are defined by a start and an end state. Animations can have an arbitrary number of keyframes in between a start and an end state. Transitions are therefore suited for simpler use-cases, whereas animations are used when the requirement is complex.

Animations

https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Animations/Using_CSS_animations

The main use-case for animations is complex, multi-step effects that run on their own.

For animations, a trigger is optional. An animation can start without a trigger, for example right after the page loads.

Animations can be created via .css files or via the element.animate() JavaScript Web API (https://developer.mozilla.org/en-US/docs/Web/API/Element/animate).
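As a minimal sketch (class name and timings are made up), an animation that runs on its own right after page load could look like this:

```css
/* The .banner element fades in and slides down as soon as the page loads.
   No trigger is needed: applying the animation property starts it. */
.banner {
  animation: slide-in 1.5s ease-out;
}

@keyframes slide-in {
  from {
    opacity: 0;
    transform: translateY(-50px);
  }
  to {
    opacity: 1;
    transform: translateY(0);
  }
}
```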

Transitions

https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Transitions/Using_CSS_transitions

The main use-case for transitions is highlighting items such as navigation elements on hover. Instead of instantly switching the background color of such a navigation element to a darker shade, the transition can smoothly transition the color to the darker shade which gives the page a more relaxed and more organic feel.

A transition needs a trigger to run. This trigger can be the change of a CSS property or some JavaScript.

Transitions are added to CSS classes. A transition lists the properties of that class that should change smoothly instead of instantly. A transition always needs a new value for a property, because the property is animated from its current value to the new value. The new value is not defined by the transition itself; in other words, it is not predefined. When a property listed in the transition changes, either by adding or removing a CSS class or via JavaScript, the transition is triggered and the value is smoothly interpolated between the current and the new value.

Transitions are controlled using the transition properties inside a CSS class. The transition properties are: transition-property, transition-duration, transition-timing-function and transition-delay.

A shorthand notation is available that combines all properties above into a single line: transition: <property> <duration> <timing-function> <delay>;
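As a minimal sketch (selector and colors are made up) of the navigation hover use-case described above, using the shorthand:

```css
/* Smoothly darken a navigation link's background on hover
   instead of switching instantly. */
.nav-link {
  background-color: #4caf50;
  /* shorthand: <property> <duration> <timing-function> <delay> */
  transition: background-color 0.3s ease-in-out 0s;
}

.nav-link:hover {
  /* changing this property triggers the transition */
  background-color: #2e7d32;
}
```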

Style an Image Slider

Introduction

An image slider is a component that displays a single image out of a set of images. The slider contains controls to switch to a different image. The slider can be controlled via direct user input or indirectly via a timer that automatically operates the image slider.

The purpose of a slider is similar to an automated marketing presentation or sales video. It is there to catch a visitor, generate attention and interest, and to advertise an idea or product to a user who generally has no time to lose.

Features

Features of a slider are:

  • Display an array of images
  • Width, height, general responsiveness
  • Dotted button navigation, minimum, maximum amount of dots
  • Left-right arrow button navigation
  • Mouse- or Thumb Drag navigation (swipe left and right) 
  • Navigation wrap-around
  • Cooldown/Countdown timer that initiates the next transition automatically
  • User interaction with the dotted button navigation will interrupt the automated timer to give the user time. The timer will take over again after some time.
  • Transition default direction or pattern (To-Right, To-Left, Random, …)
  • Animation options for the transition between images (slide, shrink, grow, opacity, alpha, …)

Minimum Viable Product – MVP

Creating an MVP that contains the subset of all features constituting the bare minimum to be recognized by a user as a usable, beneficial component is a strategy to arrive at a result without getting side-tracked and losing focus.

Why is an MVP important? In short, the main problem is our limited experience when approaching a topic for the first time. Because of all the unknown road blocks a beginner will face on their learning journey, there is natural delay and natural feature creep. Features creep in because unknown requirements pop up and features have to be added to get even the most limited MVP working.

The image slider MVP will:

  • Display three images
  • Contain no navigation
  • Transition between images on click on the current image
  • Transition direction is fixed: to-right is used
  • Contains no wrap-around
  • The transition is not animated, the images are just exchanged
  • The slider is not responsive

MVP Implementation

The HTML markup contains the three slides:

<div class="wrapper">
  <div class="slides">

    <div class="slide active">
      First
      <img src="../img/mountain-1.jpeg">
    </div>

    <div class="slide">
      Second
      <img src="../img/mountain-2.jpeg">
    </div>

    <div class="slide">
      Third
      <img src="../img/mountain-3.jpeg">
    </div>

  </div>
</div>

The CSS contains general styling for the slides, which sets all slides into the display:none state to hide them. In addition, it contains a CSS class called active. This active class sets display:block on one of the slides to show that slide.

html, body {
  width: 100%;
  height: 100%;
}

.wrapper {
  height: 100%;
}

.slides {
  display: flex;
  align-items: center;
  justify-content: center;
}

.slide {
  display: none;

  width: 100px;
  height: 100px;
  background-color: red;
}

.slide.active {
  display: block;
}

The JavaScript registers a click listener on the slides container. In the click listener, the active slide is retrieved and, based on the active slide, the index of the next slide is computed.

When the next index is computed, the active class is toggled both on the current and on the next slide, which will exchange the images.

function transition() {

  // select the slides container element
  let slidesElement = document.querySelector('.slides');

  // select the NodeList of all slides
  let slideElementsArray = slidesElement.querySelectorAll('.slide');

  slidesElement.addEventListener('click', () => {
    // select the active slide element
    let activeSlideElement = slidesElement.querySelector('.active');

    // use the prototype as a NodeList has no indexOf() method
    let currentSlideElementIndex = Array.prototype.indexOf.call(slideElementsArray, activeSlideElement);

    // find the next index
    let nextSlideElementIndex = currentSlideElementIndex == slideElementsArray.length - 1 ? currentSlideElementIndex : currentSlideElementIndex + 1;

    // retrieve the next div element
    let nextSlideElement = slideElementsArray[nextSlideElementIndex];

    // toggle the active classes to display the next image
    activeSlideElement.classList.toggle('active');
    nextSlideElement.classList.toggle('active');
  })
}

transition();

Style a FlipCard

Links

https://www.w3schools.com/howto/howto_css_flip_card.asp

Introduction

This post contains my notes on the example of a flip card from W3Schools here. I personally feel the article is not written in a beginner-friendly way. It could do with more text describing what each part of the markup and CSS does, which this article tries to add.

FlipCards

A FlipCard is a rectangular area that has a front- and a back side. The 3D capabilities of CSS are used to turn the card around by 180 degrees to reveal the card’s flipside. This adds an interesting and interactive effect to a page and also saves some space for detailed information on the item displayed on the front of the card. 

Strategy

The HTML markup contains an outer div that is used to position the flip-card on the page.

Inside the outer div, there is an inner div which will rotate on hover. That inner div contains two nested divs. One is called front-side, the other one is called back-side. Both the front- and back-side are set to not render their back-facing side, that means, when the back-facing side faces the viewer, that side is not rendered by the browser.

Initially the nested front-side is not rotated (rotated by 0 degrees), whereas the back-side is initially turned around, i.e. rotated by 180 degrees. The front- and back-side are not rotated any further from here on out; only the inner div is rotated.

HTML Markup

<div class="flip-card">
  <div class="flip-card-inner">
    <div class="flip-card-front">
      <img src="https://picsum.photos/300/200" alt="Avatar" style="width:300px;height:100%">
    </div>
    <div class="flip-card-back">
      <h1>John Doe</h1>
      <p>Architect & Engineer</p>
      <p>We love that guy</p>
    </div>
  </div>
</div>

You can see the outer flip-card to position the entire card and the inner flip-card that contains the front and back side divs.

CSS Styling

The outer flip-card contains the dimensions and the perspective property, which introduces a real 3D rotation effect.

.flip-card {
  background-color: transparent;
  width: 300px;
  height: 300px;
  border: 0px solid #f1f1f1;
  perspective: 1000px;
}

The inner flip-card has two styles, one normal style and one style on hover.

/* This container is needed to position the front and back side */
.flip-card-inner {
  position: relative;
  width: 100%;
  height: 100%;
  text-align: center;
  transition: transform 0.8s;
  transform-style: preserve-3d;
}

/* Do a horizontal flip when you move the mouse over the flip box container */
.flip-card:hover .flip-card-inner {
  /* On hover, rotate the inner card which will rotate the front and backside with it */
  transform: rotateY(180deg);
}

The nested front- and back-side share some styles but also have styles specifically for themselves. One important specific style is the initial rotation of either 0 degrees or 180 degrees.

/* Position the front and back side */
.flip-card-front, .flip-card-back {
  position: absolute;
  width: 100%;
  height: 100%;
  -webkit-backface-visibility: hidden; /* Safari */
  backface-visibility: hidden;
}

/* Style the front side (fallback if image is missing) */
.flip-card-front {
  background-color: #bbb;
  color: black;
  /* initially not rotated */
  transform: rotateY(0deg);
}

/* Style the back side */
.flip-card-back {
  background-color: dodgerblue;
  color: white;
  /* initially rotated by 180 degrees */
  transform: rotateY(180deg);
}

Style a Checkbox as Toggle

Links

https://codepen.io/himalayasingh/pen/EdVzNL

What is this article about?

I found this wonderful codepen created by Himalaya Singh. In this pen, Himalaya takes an HTML checkbox and changes it into an iOS-style toggle button using pure CSS without any JavaScript. This is quite the useful codepen, and this article contains my notes on how to best read and understand Himalaya's CSS.

The CSS and HTML snippets in this article are not Himalaya's original code (but still heavily inspired by it). I slightly modified the snippets during my analysis, for the worse. So definitely check Himalaya's original codepen after reading this article.

How it works

The overall strategy is to use an HTML input of type checkbox and then to hide it using an opacity of 0. That way the user cannot see the input but can still interact with it.

In a second step, a so-called knob and a background layer are added to the toggle. The knob moves from left to right and displays the checked state of the input. It can also contain text. The layer acts as a visual border for the input. Both the knob and the layer have a color.

The HTML input of type checkbox has two states, checked and unchecked. CSS classes are used via a selector that selects both possible states. Within the CSS classes for each state, the knob and the layer are styled. A CSS transition defines how the styling changes when the input transitions between its two states.

HTML and Styling 

An HTML input with type checkbox is created.

<div class="button r" id="button-1">
  <input type="checkbox" class="checkbox">
  <!--<div class="knobs"></div>-->
  <!--<div class="layer"></div>-->
</div>

We’ll take care of the knob and the layer later.

The surrounding button is positioned.

.button
{
  position: relative;
  top: 50%;
  width: 74px;
  height: 36px;
  margin: -20px auto 0 auto;
  overflow: hidden;
}

Then the input element is styled. To hide the input, an opacity of 0 is used.

/* opacity 0 is entirely transparent, this hides the checkbox but lets the user interact with it still */
.checkbox
{
  position: relative;
  width: 100%;
  height: 100%;
  padding: 0;
  margin: 0;
  opacity: 0;
  cursor: pointer;
  z-index: 3;
}

At this point you will see absolutely nothing on your page any more. To add a graphical representation back, let's start by adding the knob.

It is important to start with the knob because the knob is what gives the layer content. Starting with the empty layer causes the layer to collapse completely. A collapsed div is basically invisible, hard to style and generally a source of confusion.

Adding the knob

In the HTML, activate the knobs div by removing the comment around it. Also add a CSS class that positions the knobs div within its positioned parent.

/* styles the div that is inserted below the input checkbox html element */
.knobs
{
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  z-index: 2;
}

Style the two states checked and unchecked of the knobs div.

#button-1 .knobs:before
{
  content: 'NO';
  position: absolute;
  top: 4px;
  left: 4px;
  width: 20px;
  height: 10px;
  color: #fff;
  font-size: 10px;
  font-weight: bold;
  text-align: center;
  line-height: 1;
  padding: 9px 4px;
  /*background-color: #03A9F4;*/
  background-color: #f44336;
  border-radius: 50%;
  transition: 0.3s cubic-bezier(0.18, 0.89, 0.35, 1.15) all;
}

/* Style for when the checkbox is checked */
#button-1 .checkbox:checked + .knobs:before
{
  content: 'YES';
  left: 42px;
  background-color: #03A9F4;
  /*background-color: #f44336;*/
}

The CSS selector above contains .checkbox:checked, which is how the checkbox state is targeted using CSS. The first of the two rules does not target any state, so it styles the default unchecked state. The first rule also contains a transition. The transition defines how the change between both CSS classes is animated. This animation moves the knob from left to right and vice versa; it changes the text and the color.

Adding the Layer

To also style the layer, first uncomment the layer div in the HTML. Then, using the same principle as for the knobs, define two CSS classes, one per checkbox state, that define the appearance of the layer in each state and how the transition between the two states is animated.

/* The layer is the background that the slider knob is displayed inside.
It is inserted as a separate div below the input checkbox html element.
The layer provides the visual appearance and the outline border around the checkbox. */
.layer
{
  position: absolute;
  border-radius: 100px;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  width: 100%;
  background-color: #fcebeb;
  /*background-color: #ebf7fc;*/
  /*background-color: #f44336;*/
  transition: 0.3s ease all;
  z-index: 1;
}

/**
The general sibling combinator is made of the "tilde" (U+007E, ~)
character that separates two sequences of simple selectors.
The elements represented by the two sequences share the same parent in the document tree
and the element represented by the first sequence precedes (not necessarily immediately)
the element represented by the second one.
*/
#button-1 .checkbox:checked ~ .layer
{
  /*background-color: #fcebeb;*/
  background-color: #ebf7fc;
  /*background-color: #03A9F4;*/
}

#button-1 .knobs, #button-1 .knobs:before, #button-1 .layer
{
  transition: 0.3s ease all;
}

Summary and Next Steps

The toggle works and looks awesome. Things that come to mind are: how do you get translated text onto the toggle? Maybe it is easier to not have any text on the knob of the toggle, which saves a lot of work. Also, the CSS probably should be translated to SCSS if that is what your project uses. Another important step is to use the toggle in a form element of your framework of choice. Whether the input works nicely with Angular, Vue and React remains to be tested.

Organizing JavaScript

Links

https://en.bem.info/methodology/
https://www.webcomponents.org/specs

Prologue

This post is about JavaScript usage on a larger scale. Writing small snippets is one thing, and there you can get away with almost anything. Writing larger applications becomes a question of scaling. You need architectural patterns to make linear progress when working on large applications.

Scope and Modularity

JavaScript has scopes. The global scope is the parent scope that always exists. Without any further preparation, variables are assigned to the global scope. That means all variables from code in script tags or imported .js files will by default live in the global scope, unless a local scope is introduced to house them.

The issue with the global scope is that variable names can clash and code ultimately stops working. It prevents good organization and code reuse. Modularity, and therefore local scope, is needed.

Hoisting with the var Keyword

Hoisting means that variable declarations that use the var keyword are moved to the top of the current script (tag or file) or, if they are defined within a function, to the top of that function. Only variable declarations are hoisted; initializations are not moved to the top. That means a hoisted variable can be undefined because the initialization stays in place.
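A minimal sketch (function and variable names are made up) showing that the declaration is hoisted but the initialization is not:

```javascript
// `var x` is hoisted to the top of hoistingDemo(), so reading x before the
// declaration does not throw — but the value 5 is only assigned in place.
function hoistingDemo() {
  const before = typeof x; // 'undefined': declared, but not yet initialized
  var x = 5;
  const after = x;         // 5
  return [before, after];
}

console.log(hoistingDemo()); // [ 'undefined', 5 ]
```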

The effect is the same as seen in older versions of C, where variables could only be declared at the top of a function. Hoisting is the automatic process of moving variable declarations to the top of the function, script tag or script file.

Hoisting with the let Keyword

Variables defined with the let keyword are not hoisted all the way to the top of the function or script (tag or file) as with the var keyword; they are hoisted only within the nearest enclosing block. Accessing them before their declaration does not yield undefined but throws a ReferenceError (the so-called temporal dead zone).

The nearest scope is defined by everything that is enclosed in curly braces. That means if-statements, function bodies, loop bodies and even an artificial block defined by an opening and a closing curly brace define local scope.
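A small sketch (names made up) contrasting the two keywords: the var variable escapes the if-block, the let variable does not:

```javascript
// `var` is hoisted to the top of the function, `let` stays inside the block.
function scopeDemo() {
  if (true) {
    var leaked = 'var escapes the block';
    let contained = 'let stays inside';
  }
  // typeof is safe on identifiers that are not in scope
  return [typeof leaked, typeof contained]; // [ 'string', 'undefined' ]
}

console.log(scopeDemo());
```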

How to use var, let and const

Code is easiest to read if you use const everywhere. If a variable's value has to change, use let instead of const. Never use var.

Functions define local Scope

Each function introduces new local scope defined by the function body.

In the very good book Mastering Modular JavaScript by Nicolás Bevacqua, the author lists three ways functions have been used in the past to define local scope similar to modules (to be precise, the pattern is called IIFE, Immediately-Invoked Function Expression):

(function() {
  console.log('IIFE using parenthesis')
})()

~function() {
  console.log('IIFE using a bitwise operator')
}()

void function() {
  console.log('IIFE using the void operator')
}()

Blocks define local Scope

Not only functions define local scope, blocks do too. A function body is a special case of a block.

In ES6 this code is possible:

{ // open block
  let tmp = ···;
  ···
} // close block

The let keyword creates a variable in the local scope (as opposed to the var keyword, which creates a variable that is hoisted to the top of the function, potentially changing the scope it is defined in).

Modules define local Scope

ECMAScript 2015 (ES6) introduced modules (ECMAScript Modules, ESM) as part of the JavaScript language. Node uses CommonJS by default; there, the ESM system was initially only available when specifying --experimental-modules and using the .mjs extension for modules.

Before ES6, custom module systems (CommonJS, RequireJS) provided module functionality for JavaScript programmers. Those systems are still used extensively today, probably because it is a massive undertaking to refactor all existing code to ES6.

So now there is a mix of CommonJS, RequireJS and ES6 modules in the wild. The syntax for ES6 modules (export, import) differs from the CommonJS and RequireJS syntax (use of the exports object and require).
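As a non-runnable, side-by-side sketch (file names are made up), the same tiny module in both syntaxes:

```javascript
// ---- math.js (CommonJS, e.g. node) ----
function add(a, b) { return a + b; }
module.exports = { add };

// ---- consumer.js (CommonJS) ----
// const { add } = require('./math');

// ---- math.mjs (ES6 module) ----
// export function add(a, b) { return a + b; }

// ---- consumer.mjs (ES6 module) ----
// import { add } from './math.mjs';
```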

While CommonJS is the dominant module system in node, RequireJS is more geared towards browsers. RequireJS implements the Asynchronous Module Definition (AMD) standard. Typescript adopted the ES6 module syntax from the start.

Later, Browserify allowed bundling node modules into a browser-ready format and made it possible to use the node package manager and all its modules in the development of web applications that run in a browser. Today webpack is the most widely used bundler and has mostly taken over from Browserify.

Across all possible module systems, the common parts are that every file is a module and a module cannot be spread across multiple files. Every module has its own scope and context.

Object Orientated Programming (OOP)

To me, object orientation is about combining state and functions that allow you to access and manipulate that state in one place that you can easily find even in large applications.

Even after weeks of not working on a project, it is clear that a person's name and address are stored in the Person class. It just makes sense and is intuitive.

In Java and C++, for example, a class has to be defined first. The class definition controls which member variables and which functions a class has. At runtime, variables (aka objects or instances) are created from the class. No member variables or functions can be added or removed.

JavaScript is different in many ways. An object can be created without a class definition. At runtime, member variables and functions can be added and removed from objects.

OOP using Object Initializers

let personInstance = {
  name: 'person1',
  age: 30
}

// location A

personInstance.isBlocked = true;

// location B

delete personInstance.isBlocked;

// location C

In the example above, an object is defined using an object literal, without a class and without a constructor function (see below)! This so-called object initializer syntax is documented at https://developer.mozilla.org/de/docs/Web/JavaScript/Guide/Working_with_Objects

The object initializer is a block of literal notation that defines the object, its member variables and its functions. For alternatives to object initializers, look at constructor functions and classes!

In the rest of the script, a member variable isBlocked is added and initialized and then removed again. console.logging personInstance.isBlocked at location A yields undefined, as the isBlocked member is not yet part of the object. At location B, logging will yield true as expected. At location C, logging will again yield undefined as the member was removed.

let app = {

  settings: {
    container: document.querySelector('.calendar'),
    calendar: document.querySelector('.front'),
    days: document.querySelectorAll('.weeks span'),
    form: document.querySelector('.back'),
    input: document.querySelector('.back input'),
    buttons: document.querySelector('.back button')
  },

  init: function() {
    console.log('container: ', this.settings.container);
    console.log('calendar: ', this.settings.calendar);
    console.log('days: ', this.settings.days);
    console.log('form: ', this.settings.form);
    console.log('input: ', this.settings.input);
    console.log('buttons: ', this.settings.buttons);
  },
}

app.init();

The code above combines data (settings) and functions (init()) into an object (app). Then it calls a method on the app instance. The call will output the state stored in that instance.

There is a shorthand notation to add a function to an object. Instead of using the key-value pair notation ( functionName: function() { … } ) you can use functionName() { … }. In the example above: init() { … } instead of init: function() { … }
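For example (object and method names are made up), both notations behave identically:

```javascript
const greeter = {
  name: 'world',

  // longhand: key-value pair notation
  greetLong: function () {
    return 'Hello, ' + this.name + '!';
  },

  // shorthand method notation — identical behaviour
  greet() {
    return 'Hello, ' + this.name + '!';
  }
};

console.log(greeter.greet());     // Hello, world!
console.log(greeter.greetLong()); // Hello, world!
```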

The keyword: this

In JavaScript, this used inside a function refers to the object that called the function. If the function is a member function of an object, that behaviour is unchanged. In JavaScript, the this keyword has no fixed relation to the object a function is defined on; instead it has a relation to the caller.

In Java and C++, this used inside a member function refers to the object instance. Here, the this keyword has no relation to the caller!

In DOM event handlers such as click handlers, this refers to the element the handler is attached to.

When a function defined in global scope is called as a plain function in a browser, this refers to the window object. When strict mode is enabled, this inside such a function is undefined and does not point to the window object!

With arrow functions, the behaviour of the this keyword is different than for normal functions, so arrow functions are not mere syntactic sugar; they have their own characteristics. Arrow functions do not have their own this. Because of that, calling an arrow function does not bind this to the context of the call, and the arrow function does not shadow a this that already exists where the arrow function is defined. Since you can still use the this keyword inside an arrow function, the question remains what it refers to: this inside an arrow function is simply the this of the surrounding scope. The good news for OOP is that inside a method of an object, an arrow function's this still refers to that object, so it allows you to access the member variables and functions of the instance.
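A small sketch (names made up) of why this is good news for OOP: the arrow function inside the method sees the object via this:

```javascript
// The arrow function has no own `this`, so `this` inside it is the `this`
// of the enclosing incrementTwice() call — the counter object itself.
const counter = {
  count: 0,
  incrementTwice() {
    const inc = () => { this.count += 1; };
    inc();
    inc();
    return this.count;
  }
};

console.log(counter.incrementTwice()); // 2
```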

When the this keyword is used inside a constructor function, this points to the newly created object instance.

https://developer.mozilla.org/de/docs/Web/JavaScript/Reference/Functions/Arrow_functions

Constructor Functions

Instead of defining objects using object literal notation, there is another way: constructor functions.

function Person(first, last, age, eye) {
  this.firstName = first;
  this.lastName = last;
  this.age = age;
  this.eyeColor = eye;
  this.name = function() {
    return this.firstName + " " + this.lastName;
  };
}

const myFather = new Person("John", "Doe", 50, "blue");
const myMother = new Person("Sally", "Rally", 48, "green");

console.log(myFather.name());

The example is taken from here: https://www.w3schools.com/js/js_object_constructors.asp

A function called Person is defined and later used in conjunction with the new keyword to create the instances myFather and myMother.

The Person() function is referred to as the constructor function. Inside the constructor function, the this keyword points to the instance that is currently being created.

OOP with Prototypes

Functions can be added to a constructor function's prototype; they are then shared by all instances and can be called on any instance later.
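A minimal sketch (names made up): greet() is defined once on the prototype and shared by every instance created with new:

```javascript
function Person(name) {
  this.name = name;
}

// defined once, shared by all instances via the prototype chain
Person.prototype.greet = function () {
  return 'Hello, ' + this.name + '!';
};

const alice = new Person('Alice');
const bob = new Person('Bob');

console.log(alice.greet()); // Hello, Alice!
console.log(bob.greet());   // Hello, Bob!
console.log(alice.greet === bob.greet); // true — the function is shared
```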

OOP With Classes

The class keyword in JavaScript is syntactic sugar for JavaScript's prototype system. That means the interpreter transforms the class syntax into the underlying prototype features, so the programmer is freed from that task.

Classes were introduced in ES6 (ECMAScript 2015).
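A short sketch (class and method names are made up) of the class syntax; under the hood, the method still ends up on the prototype:

```javascript
class Rectangle {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }

  // placed on Rectangle.prototype by the class syntax
  area() {
    return this.width * this.height;
  }
}

const r = new Rectangle(3, 4);
console.log(r.area()); // 12
console.log(Object.getPrototypeOf(r) === Rectangle.prototype); // true
```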

Epilogue

Especially with languages I am not proficient in, I personally find myself in a situation where my progress keeps getting slower over a day of developing software, just because I keep battling the language and the question of how to organize the code as the application gets bigger. It gets slower until I come to a complete stop and have to give up for the day. The next day there is even less progress. It becomes similar to wading through a swamp: you are finally so tired that you give up and the swamp swallows you.

With languages I am proficient in, I find that I am not blocked by the language itself. I am blocked by the medium to hard problems I have to solve, but the programming language is a tool that makes it easier to solve them rather than slowing me down.

When you find yourself in a situation where the language is slowing you down, you have to realize that your programming skills in that language are lacking and you have to go back to school.

This article showed ways to use JavaScript that are applicable to larger problems.

webpack

webpack is a build system for JavaScript which requires node to run. It refers to itself as a static module bundler. It views your source files as modules and organizes the modules and their dependencies in a graph. After traversing the graph, it outputs one or more bundles. So: modules in a dependency graph in, bundles out.

webpack uses an entry point, which is similar to a main() function in a programming language, as it marks the starting point of operation. The entry point, entry for short, is the module where webpack starts traversing the dependency graph.

Loaders do Load Modules

In webpack, you import modules to build up the dependency graph. A module can be any file, as long as there is a loader for that type of file. By default, webpack understands JavaScript and JSON files and can convert those into modules and add them to the dependency graph via import statements.

Additional loaders allow webpack to understand other types of files, convert them into modules and add them into the dependency graph.

When webpack sees an import, it looks into its loader definitions and, if it finds a matching loader, applies that loader to the import. The module rules for loader definitions are contained in webpack.config.js.

const path = require('path');

module.exports = {
  output: {
    filename: 'my-first-webpack.bundle.js',
  },
  module: {
    rules: [{ test: /\.txt$/, use: 'raw-loader' }],
  },
};

In the rule above, test defines a regular expression; when an imported file's path matches it, the use part defines which loader to apply. In this example, the raw-loader is applied to all imported .txt files.

Ultimately, your resulting bundle or bundles will contain all the JavaScript, HTML, CSS, images and other files that you import as modules. That's right: you treat CSS files and everything else you need as modules when using webpack. You will in fact import CSS files! A pretty exciting concept if you ask me!

The Entry

webpack uses a configuration file webpack.config.js. Here you specify the entry:

module.exports = {
  entry: './path/to/my/entry/file.js',
};

The entry will be a file called ./src/index.js in most cases.

Plugins

Plugins are added via the webpack.config.js file. They are then called by the webpack compiler during compilation.

An example configuration is:

const HtmlWebpackPlugin = require('html-webpack-plugin'); //installed via npm
const webpack = require('webpack'); //to access built-in plugins
const path = require('path');

module.exports = {
  entry: './path/to/my/entry/file.js',
  output: {
    filename: 'my-first-webpack.bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        use: 'babel-loader',
      },
    ],
  },
  plugins: [
    new webpack.ProgressPlugin(),
    new HtmlWebpackPlugin({ template: './src/index.html' }),
  ],
};

Here, the HtmlWebpackPlugin and webpack's built-in ProgressPlugin are used.

Example

First, create a node project.

cd dev/javascript
mkdir webpack_helloworld
cd webpack_helloworld
npm init -y
code .

Then install the dependencies.

npm install --save-dev webpack webpack-cli
npm install --save-dev html-webpack-plugin

Setup the files and folders.

Insert a webpack.config.js next to the package.json.

const HtmlWebpackPlugin = require('html-webpack-plugin');
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './index.js',
  output: {
    path: path.resolve(__dirname, './dist'),
    filename: 'index_bundle.js',
  },
  plugins: [new HtmlWebpackPlugin()],
};

This webpack.config.js requires an index.js file, so you have to create one next to the webpack.config.js. Inside the index.js file, just output some text.

console.log('webpack works! - Hello World!');
alert('webpack works! - Hello World!');

To start the webpack build, add a script in the package.json

"scripts": {
  "build": "webpack --config webpack.config.js",
  "test": "echo \"Error: no test specified\" && exit 1"
}

Start the webpack compilation

npm run build

Now check your project folder. There is a dist folder generated for you. Inside that dist folder, the HtmlWebpackPlugin has generated an index.html file that imports a generated index_bundle.js. The index_bundle.js file contains all entry points defined in webpack.config.js together with their dependencies. That means it will contain the code from index.js in this example.

Revisiting the Result

Now, webpack created a dist/index.html file for us, bundled all entry modules (currently JavaScript files) together with all their dependencies, and imported the bundle automatically into the dist/index.html file.

This is a wonderful situation for building a framework that generates all HTML markup programmatically. You could build all HTML via JavaScript’s DOM API from your JavaScript entry point. This is not necessarily what you want, though. If you want to use webpack and its hot reload feature to work with CSS and HTML, then you most likely want an HTML file you have full control over.

Using HTML with webpack

The question is, how do you add your own HTML markup into the generated index.html? The answer is the HtmlWebpackPlugin’s template feature. Credits go to this solution: https://stackoverflow.com/questions/39798095/multiple-html-files-using-webpack/63385300 and the documentation https://github.com/jantimon/html-webpack-plugin/blob/main/docs/template-option.md

A template is an HTML file that you put into a src folder and that you configure in the HtmlWebpackPlugin in webpack.config.js:

const HtmlWebpackPlugin = require('html-webpack-plugin');
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './index.js',
  output: {
    path: path.resolve(__dirname, './dist'),
    filename: 'index_bundle.js',
  },
  plugins: [
    new HtmlWebpackPlugin({
      filename: 'index.html',
      template: 'src/index.html',
      chunks: ['main'],
    }),
  ],
};

A valid template (src/index.html) looks like this:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Webpack App 2</title>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
  </head>
  <body>
    <h1>Hewlo Wurl!</h1>
  </body>
</html>

Webpack will first copy this file into the dist folder and then modify the copy: it inserts a script tag that imports the compiled modules (bundles) into the HTML file copied from the template.
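After the build, the copied template in dist/index.html ends up looking roughly like this; the exact placement and attributes of the injected script tag depend on the html-webpack-plugin version, so treat this as a sketch:

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Webpack App 2</title>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <!-- injected by HtmlWebpackPlugin -->
    <script defer src="index_bundle.js"></script>
  </head>
  <body>
    <h1>Hewlo Wurl!</h1>
  </body>
</html>
```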

To try this out, build again (npm run build) and reopen dist/index.html in your browser. You should first get an alert box, which comes from the entry module index.js (if you followed this example), and then you should see the custom HTML from the index template! Wonderful, we are almost there!

The next question is, how to import CSS.

Using CSS with webpack

Let’s say you want to use ./src/index.css

h1 {
  color: red;
}

First install the webpack css-loader and the style-loader

npm install --save-dev css-loader
npm install --save-dev style-loader

Next, add a rule to webpack.config.js that makes webpack apply the CSS loaders to .css files:

const HtmlWebpackPlugin = require("html-webpack-plugin");
const path = require("path");

module.exports = {
  mode: "development",
  entry: "./index.js",
  output: {
    path: path.resolve(__dirname, "./dist"),
    filename: "index_bundle.js",
  },
  plugins: [
    new HtmlWebpackPlugin({
      filename: "index.html",
      template: "src/index.html",
      chunks: ["main"],
    }),
  ],
  module: {
    rules: [
      {
        test: /\.css$/i,
        use: ["style-loader", "css-loader"],
      },
    ],
  },
};

Now, this is where it gets a little weird, at least for my liking. You will use an import statement to import your CSS file into the entry module. The import makes webpack apply the css-loader and style-loader to the CSS file.

import css from "./src/index.css";

console.log('webpack works!');
alert('test');

Now rebuild and reload the generated index.html. The header should be displayed in red color.

Using JSON files with webpack

Say you have a JSON file on your hard drive that contains JSON data you need to process in some JavaScript routines. You can import the JSON file using a Promise and process the data once the Promise is successful.

import('./data/testdata.json').then(({ default: testdata }) => {
  let jsonOutputElement = document.getElementById('rawJson');
  jsonOutputElement.innerHTML = JSON.stringify(testdata, undefined, 2);

  // do whatever you like with your "testdata" variable
  //console.log('testdata: ', testdata);
  initTreeStructure(testdata);
});

To load the JSON file, webpack uses a JSON loader that not only reads the file into a string but also parses the JSON into a JavaScript object.
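Conceptually, the built-in JSON handling does little more than this; a sketch of the idea, not webpack's actual implementation:

```javascript
// What a JSON loader conceptually does: take the raw file contents (a string)
// and turn them into a ready-to-use JavaScript object via JSON.parse.
const rawFileContents = '{ "name": "testdata", "entries": [1, 2, 3] }';

const asModule = JSON.parse(rawFileContents);

console.log(asModule.name);           // "testdata"
console.log(asModule.entries.length); // 3
```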

In earlier versions of webpack, a JSON loader had to be installed manually and added to the list of loaders in webpack.config.js:

npm install json-loader --save-dev

module.exports = {
  module: {
    loaders: [
      {
        test: /\.json$/,
        loader: 'json-loader'
      }
    ]
  }
}

It seems that manually adding a JSON loader in newer versions of webpack actually causes issues: the added loader conflicts with the onboard JSON loader, leading to parse errors during JSON parsing! I found that in the current version of webpack, it is sufficient to just import JSON files without installing or configuring any JSON loader.

Using HTML components with webpack

https://medium.com/hackernoon/using-html-components-with-webpack-f383797a5ca

Hot Module Replacement aka. Hot Reload

webpack can be instructed to watch your files for changes, compile and reload the page in the browser for you. That way the latest changes are available on save.

To enable hot reload, first install webpack-dev-server

npm install webpack-dev-server --save-dev

Now, edit package.json and add a serve script:

"scripts": {
  "build": "webpack --config webpack.config.js",
  "serve": "webpack serve",
  "test": "echo \"Error: no test specified\" && exit 1"
},

Run the serve script

npm run serve

This will bring up a webpack development server with hot reload capabilities. The console prints a URL; you have to open that URL in a browser to get the hot-reloaded page. Do not just open the HTML page in your dist folder, as that file will not be hot-reloaded. Your browser has to access the page from the development server!
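If the dev server does not pick up your output directory automatically, you can point it there explicitly in webpack.config.js. A minimal sketch; the option names below are those of webpack-dev-server 4 and newer:

```javascript
// webpack.config.js (excerpt) — serve the files in ./dist with hot reload.
module.exports = {
  // ...mode, entry, output and plugins as before...
  devServer: {
    static: './dist',
    hot: true,
  },
};
```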

Test it out: change your CSS or your JavaScript. The browser is instructed to reload the page, and your changes are immediately reflected in the browser.

SMSC LAN9512 / SMSC LAN9514 dev board by matrixstorm / matrixprog

Stephan Bärwolf aka. Matrixstorm aka. matrixprog created a circuit board for the SMSC LAN9512 / LAN9514 USB Ethernet controller chip. This page describes my progress with his board.

About the LAN951X Chip

The company SMSC, which was later acquired by Microchip, created the LAN9512 and LAN9514 chips. These chips are USB hub controllers that also contain an Ethernet controller. They convert USB transfers to Ethernet packets. The LAN9512 supports 2 USB ports, the LAN9514 supports 4 USB ports; the last digit determines the number of supported USB ports.

The LAN951X is important because it is used in the early Raspberry Pi models and the BeagleBone boards. When writing an operating system for those boards, a working driver for the LAN951X controller is required.

An open source driver for the LAN951X exists in the Linux operating system and also in the embedded Xinu codebase. A custom operating system can model its driver on those implementations.

About Matrixstorm’s Board

Matrixstorm’s board is described in several forum entries and in one GitHub repository.

https://www.mikrocontroller.net/topic/457051

https://www.mikrocontroller.net/topic/413047?goto=new#new

https://www.ebay.de/itm/LAN951X-Adapterboard-Basisplatine-inkl-Komponenten-OHNE-LAN951X/233193113581?_trkparms=aid%3D111001%26algo%3DREC.SEED%26ao%3D1%26asc%3D20170511121231%26meid%3D31fbe704dece42cf9703a6459ca5f905%26pid%3D100675%26rk%3D5%26rkt%3D15%26mehot%3Dnone%26sd%3D233832330319%26itm%3D233193113581%26pmt%3D0%26noa%3D1%26pg%3D2380057&_trksid=p2380057.c100675.m4236&_trkparms=pageci%3Adb26b22d-501a-11eb-9290-c2d29be0b34e%7Cparentrq%3Ad7aca2f71760a9c98cd4a5aefff12fc4%7Ciid%3A1

https://github.com/dumpsite/LAN95XX-board

The second link contains images of a carrier board that contains Ethernet Jacks to which the LAN951X board is attached.

Next Steps…

  • Prepare the reflow oven.
  • Assemble the board.
  • Add interfaces (USB plug and ethernet connector)
  • Attach the entire board to a windows PC via USB for testing.

Using the ASIX USB Ethernet controller with the Teensy 4.1

This blog post documents how an ASIX USB Ethernet adapter can be connected to a Teensy 4.1 board. As the USB Ethernet adapter, the widely supported (Linux drivers exist) adapter from Olimex is used. It contains the ASIX USB Ethernet controller chip.

The Teensy board contains native Ethernet support. This article is not about the native Ethernet on the Teensy but rather about the native USB host controller support that the Teensy has. It describes how to connect an ASIX USB Ethernet adapter to the USB controller on the Teensy 4.1 to send Ethernet packets to an IP address.

The FNET library contains an example called ASIXEthernet_Test.ino. This example can be executed on the Teensy 4.1 via the Arduino IDE.

The FNET example imports the ASIX driver for the Teensy, which is contained in this repository. So first, install the FNET library and the TeensyASIXEthernet library via .zip or via the library manager into your Arduino IDE.

The Teensy 4.1 comes in several variants. One option is to buy a Teensy that already has pin headers professionally soldered in, so you do not have to solder anything. If you have a Teensy 4.1 that has no pins soldered to the USB contacts on the board, solder five pins in. This page contains pictures of where the USB cables are connected on the Teensy 4.1, which shows you where to solder the USB pin header in. If you turn the Teensy 4.1 around, the bottom of the PCB contains the pin names on the bottom silk screen. USB On-the-Go added a fifth pin; on the host side of a normal USB connection, this fifth pin is just connected to ground. For this example, only four of the five pins are required. The pins are 5V (red), D- (white), D+ (green) and ground (black).

As usual, soldering makes better contact than just loosely sticking pin headers into the holes; the contact is only really established through the solder! If your setup does not work (no USB or Ethernet connection), check your soldering again. For me, it only worked after fixing my solder work once.

The next step is to connect the ASIX USB Ethernet controller to the USB pins. For that, you can purchase USB adapter cables that have headers on one side and a USB port connector (MAKE SURE NOT TO ACCIDENTALLY BUY A USB PLUG CABLE INSTEAD OF A USB PORT CABLE!) on the other side. A depiction of such a cable is given here. Plug the ASIX USB Ethernet adapter in, either directly or via a USB hub if you like. Connect an Ethernet cable to the adapter and connect it to your home network, which should contain a DHCP server so that the FNET stack can retrieve an IP address.

In the Arduino IDE, open the FNET example sketch ASIXEthernet_Test.ino. Compile it and upload it to the Teensy 4.1. Do not forget to change the target hardware to the Teensy 4.1 board in your Arduino IDE. If you target the wrong board, the headers used in the example sketch will not be found.

Unplug and reconnect the Teensy to your computer via USB to give it power. Open the serial monitor and wait. The example sketch will start the FNET stack in the background; this will take some time and there is no output whatsoever telling you to wait. Once the stack is up, it will retrieve an IP address via DHCP and output something similar to this:

SetMACAddress: 00001001B3BD
netif Initialized
Initializing DHCP
DHCP initialization done!
IPAddress: 0.0.0.0
SubnetMask: 0.0.0.0
Gateway: 0.0.0.0
DHCPServer: 0.0.0.0
State: 2
IPAddress: 192.168.0.10
SubnetMask: 255.255.255.0
Gateway: 192.168.0.1
DHCPServer: 192.168.0.1
State: 5

The LEDs inside the USB Ethernet adapter casing should begin to light up and flicker.

Once you see this output, it is time to instruct the Teensy to send ethernet packets. In order to do that, the example sketch uses the FNET benchmark code. You can start the benchmark by sending a command to the Teensy via the Serial Monitor’s send feature. The benchmark has several options so here is one example command:

benchtx -a 192.168.0.234 udp -m 1272

First of all: you have to send a space after the last character. So before sending this command, make sure there is a trailing space! The parser of the FNET benchmark code is very picky about the input it accepts; if there is no space, the command is rejected!

The benchmark tool's command line is:

benchtx -a <remote_ip> [tcp|udp] [-m <message size>] [-mn <number of messages>]

For me, the last optional flag -mn was never accepted by the parser, hence the example command above does not specify a message number; the default of 10000 messages is used. The message size default value is 1272. The example command uses UDP for no specific reason. The remote IP is the IP of the device in your local network that should receive the Ethernet packets.

The FNET benchmark code will blast out 10000 packets as fast as it can, not caring about packet loss or anything. It will not wait in between packets, and it will output a single line of statistics about the burst after it is done.

Megabytes: 0.587664  Seconds: 0.1250  KBits/Sec: 37610.4960

In a running instance of Wireshark on the remote machine, you can see some of the 10000 packets arrive. If you have a means to count the incoming packets, you can check how many of the UDP packets actually made it and compute the success rate of the burst transfer.

Receiving Ethernet Packets

For receiving Ethernet packets over the ASIX USB Ethernet adapter, the code in the ASIX example has to be modified! The code contains a variable called MacAddress.

uint8_t MacAddress[6] = {0x00,0x50,0xB6,0xBE,0x8B,0xB4};

The MacAddress variable contains the six byte long MAC address that is used to participate in an Ethernet network. The MAC address uniquely identifies the Ethernet adapter in an Ethernet network, just as the IP address identifies a node in an IP network.

In contrast to an IP address, which has local scope (per subnet or local LAN), can be dynamically leased by a DHCP server and is hence not necessarily unique globally, a MAC address is a globally unique identifier.

Every Ethernet-capable device gets a unique MAC address assigned by the vendor or manufacturer of the device. The Olimex adapter has a unique MAC address, and it differs from the MAC address that is part of the ASIX example code!

Using the MAC address from the ASIX example without change causes the example code to send Ethernet frames into the network that contain this incorrect MAC address. Communication partners will then answer using that incorrect MAC address. The Olimex adapter will see those Ethernet frames and compare its own MAC address to the one contained in the frames. It will decide that the frames are directed at another participant because the MAC addresses do not match! It will then discard those frames and your code will never receive even a single frame!
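The adapter's filtering behaviour can be illustrated in a few lines of JavaScript. This is purely illustrative (the real filtering happens in the adapter hardware, and broadcast/multicast handling is ignored here); the frame objects are made up for the sketch:

```javascript
// Illustration of destination-MAC filtering: a frame is only accepted if its
// destination matches the adapter's own MAC address.
const adapterMac = '00:50:B6:BE:8B:B4'; // the adapter's vendor-assigned MAC

function accepts(adapterMac, frame) {
  return frame.destinationMac === adapterMac;
}

const frameForUs          = { destinationMac: '00:50:B6:BE:8B:B4', payload: '...' };
const frameForSomeoneElse = { destinationMac: '00:00:10:01:B3:BD', payload: '...' };

console.log(accepts(adapterMac, frameForUs));          // true  -> delivered
console.log(accepts(adapterMac, frameForSomeoneElse)); // false -> silently discarded
```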

To solve the problem, determine the MAC address of your particular Olimex adapter and update the example code with that address. One way to determine the MAC address is to plug the adapter into a Windows host and execute ipconfig /all on the command line, which lists the MAC address.

One tip for safety: keep your MAC addresses off the Internet. That means do not check your MAC address into a git repository, just as you do not check passwords into git repositories. Committing your MAC address to git connects it to your person or git account, which opens up a way for attackers.

Sending and Receiving Speeds

For sending out Ethernet frames, your code can send as fast as possible because the performance of the Teensy 4.1 CPU and the speed of the USB controller and Ethernet PHY naturally limit the number of packets sent. Packets will most likely not be lost during the send operation; the chips will process the packets when they get to them.

On the receiving side, I encountered problems. Sending from a MacBook Pro, which has a fast CPU and a really high quality Ethernet chip, can easily overpower the Teensy 4.1 with the Olimex adapter attached. Packets were dropped and not received. Because USB is a polled bus, I assume that if your system does not poll fast enough, Ethernet frames are just dropped.

Another test showed that the same effect can be observed when connecting the Olimex adapter to a powerful business laptop running Windows. Even on capable hardware, the Olimex adapter drops frames.

To test the situation, I used the code from https://www.vankuik.nl/2012-02-09_Writing_ethernet_packets_on_OS_X_and_BSD

Here, a byte array of data is sent to a MAC address of your choice, which is the Olimex adapter's MAC address in this test. The byte array is larger than the maximum allowed length of a single Ethernet frame, hence the code constructs several frames and sends them out. In this test, four frames are sent. The variable in the test is the amount of time the code sleeps (usleep(uint microseconds)) between each of the four Ethernet frames.

Using Wireshark on the sender's side, it is checked how many frames are really sent by the test program. Using Wireshark on the receiver's side, it is checked how many packets are actually received.

The breaking point for the Olimex adapter seems to be a sleep time of 1 ms. Sleeping more than 1 ms causes the Windows machine to receive every single frame correctly.

Going down to 1 ms and lower causes the Windows machine to not receive all packets. Because this effect shows on the Windows machine and on the Teensy, I have to assume that it is a problem with the Olimex USB adapter and not with the example code or Windows. It might also be because USB is a polled bus, but the business laptop should be able to poll the USB adapter faster than the Teensy, and still the same effect is noticeable.

With a lossless protocol such as TCP, lost packets will be retransmitted and the transfer should succeed even if the sender overwhelms the receiver. With a lossy Ethernet or UDP connection, this is a real problem!
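On the sender's side, the practical workaround is to pace the frames, i.e. sleep between sends. The pacing loop could look like this (a sketch in JavaScript; sendFrame is a hypothetical stand-in for whatever raw-socket send function your platform provides):

```javascript
// Pace frame transmission: wait gapMs between frames so a slow receiver
// (here: the Olimex adapter) is not overwhelmed.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendPaced(frames, sendFrame, gapMs) {
  for (const frame of frames) {
    sendFrame(frame); // hypothetical raw-socket send
    await sleep(gapMs); // in the test above, > 1 ms avoided all drops
  }
}
```

With the numbers measured above, a gap of a little more than 1 ms between frames was enough for every frame to arrive.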

Conclusion

The amazing thing about all this is that you have the code for an open source TCP/IP stack (FNET) and a working ASIX driver (https://github.com/vjmuzik/TeensyASIXEthernet) that works over USB.

For learning about the USB protocol, I personally feel that this setup is a very motivating one because it allows you to learn about USB, TCP/IP and low-level driver development in one go. This might be too much at once and become overwhelming pretty quickly, but I think being overwhelmed is better than losing interest because of a lack of exciting experiments.

Another big plus is that the USB connection to the ethernet adapter allows you to port the ASIX driver to any embedded system that has a USB host controller and that you want to connect to ethernet. This will work even if the embedded device has no native ethernet support or even if it has native ethernet support but there is no open source driver for the ethernet controller chip or if you do not understand the open source driver yet.

Anyways, I hope you took something away from reading this article. Thank you for your interest in my page.


Using a SNES Controller with an Arduino on a PC

Inspired by this GitHub repository https://github.com/burks10/Arduino-SNES-Controller and the accompanying YouTube video https://www.youtube.com/watch?v=93oCS9nF_y0, I want to write down the steps I followed to get a SNES controller working on a PC without buying any SNES adapters or permanently and irreversibly modifying the controller.

The idea is that an Arduino is used to adapt the SNES controller to a USB HID joystick device that the PC can use to control an emulator. For that, the Arduino's USB interface chip, the ATmega16U2, which takes care of the USB protocol on the Arduino, is reprogrammed by flashing custom firmware to it. The solution is also able to flash back the original Arduino USB firmware, so you will not permanently alter your Arduino.

The USB firmware is contributed by the UnoJoy project, which has its repository under https://github.com/AlanChatham/UnoJoy. UnoJoy supports the PS3 controller out of the box but not the SNES controller. burks10 added a sketch that is able to interpret the SNES signals and insert them into the data structures that the UnoJoy project mandates.

The software architecture is as follows. The USB protocol mandates that the USB controller inside your PC polls USB HID devices for input state. That means the ATmega16U2 chip on the Arduino is constantly polled for input data. The custom UnoJoy firmware flashed to the ATmega16U2 reads a dataForController_t data structure from the sketch that runs on the Arduino and returns this information to the USB controller. The sketch on the Arduino polls the SNES controller and interprets the signals as button or dpad presses. It fills in the dataForController_t with the SNES controller's button state. This is how a button press makes it from the SNES controller through the ATmega16U2 chip, through USB, and through the PC's USB controller to the emulator.

When the newly flashed Arduino is plugged in into the PC, it will register with the operating system as a HID Joystick. Selecting this HID Joystick in your emulator allows you to read input from this device.

Here are the steps in detailed order:

Hint / Important

Turn off USB Helper
Turn off all tools that might interfere with USB such as USB Overdrive.

Wire Up the SNES controller to the arduino.

Looking at the controller plug, there is a rounded corner and a flat corner.
Putting the flat corner on the left and the rounded corner on the right, the pins are numbered 1 through 7:

| 1 2 3 4 | 5 6 7 )

Connect Pin 1 on the controller to 5V on the Arduino, that means:

Controller Pin 1 <-> Arduino 5V
Controller Pin 2 <-> Arduino Pin 6
Controller Pin 3 <-> Arduino Pin 7
Controller Pin 4 <-> Arduino Pin 12
Controller Pin 7 <-> Arduino GND

Upload the correct sketch to the Arduino

git clone https://github.com/burks10/Arduino-SNES-Controller.git
Open Arduino IDE on the file snes/snes.ino
As a board, select the Arduino Uno.
Verify and upload the sketch.

Install the dfu-programmer

see https://www.arduino.cc/en/Hacking/DFUProgramming8U2

sudo port install dfu-programmer

Install libusb and libusb-compat

Turn on DFU mode

Short the 2 pins closest to the USB port to enter DFU mode

Prepare the flashing of new firmware for the Arduino USB controller chip

git clone https://github.com/AlanChatham/UnoJoy.git
cd UnoJoy/UnoJoy

Edit the TurnIntoAJoystick.command file

Replace all occurrences of ./dfu-programmer with dfu-programmer

Make it executable and run the command file:

chmod a+x TurnIntoAJoystick.command
./TurnIntoAJoystick.command

Connect the Arduino to the PC

Unplug the arduino and plug it back in

Check if MacOS has detected a controller

Apple icon in the top left > About This Mac > Overview > System Report… > USB > check if there is an entry called ‘UnoJoy Joystick’

Reverting the process (Getting back the normal Arduino Behaviour)

If you want your Arduino back, enter DFU mode again:
While the Arduino is plugged in to USB, short the two pins closest to the USB port.

Modify the TurnIntoAnArduino.command file and replace all occurrences of ./dfu-programmer with dfu-programmer

Make it executable and run the command file:

chmod a+x TurnIntoAnArduino.command
./TurnIntoAnArduino.command