Load Data from REST API

How do you load data from a REST API and then display it inside a component? In class components this is done in componentDidMount(); in functional components it is done in a call to useEffect() with an empty dependency array.

useEffect(() => {
  console.log('useEffect()');
  getRESTData();
  console.log('useEffect() done');
}, []);

There is one important thing to know! componentDidMount() and the empty-dependency effect are called after the component has been rendered for the first time. In other words, components are rendered with empty data initially!

Make sure to design your components so that they deal with empty data without throwing any errors!

When componentDidMount() or the empty-dependency effect has been executed and the data is finally available, the component is rendered again and this time it does have access to the REST data. Design your components in a way that they deal with data arriving at a later point in time!
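
A minimal sketch of this pattern (the /api/users endpoint and the UserList component are made up for illustration): the component starts with an empty array and re-renders once the effect has fetched the data.

import React, { useState, useEffect } from 'react';

function UserList() {
  // start with empty data so the first render does not throw
  const [users, setUsers] = useState([]);

  useEffect(() => {
    // hypothetical endpoint; replace with your real REST API
    fetch('/api/users')
      .then((response) => response.json())
      .then((data) => setUsers(data))
      .catch((error) => console.error('loading users failed', error));
  }, []);

  // renders an empty list first, then re-renders when the data arrives
  return (
    <ul>
      {users.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  );
}

export default UserList;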

Angular Testing

Introductory Rant

What happens when code is untested? At first, nothing major will happen. The first generation of team members knows the ins and outs of the code. They created each part at their preferred speed. Large applications grow over several months or years, and the knowledge is stored in the heads of the first-generation developers. Due to untested code there will be some bugs, but those bugs can be solved by the devs because they know exactly who wrote the code, what causes the bugs and how to fix them quickly.

Bad things happen months and years later. The price is paid by the second generation of developers. Once people leave for new jobs, the team is eventually cycled out and the second generation takes over. Or maybe the A-team of developers is put onto another project and the B-team takes over. Lack of knowledge transfer and documentation leads to a phase of utter chaos. A vast, undocumented, untested code base is dumped onto a team that has no experience in the field whatsoever. Inexperienced people are now assigned the job of reengineering complex interactions in a short amount of time and of quickly implementing new, working features in a potentially broken code base. I argue that this task is almost as difficult as creating the original project, although the difficulty lies not in the engineering but in understanding the existing code base.

Now nobody knows what the code is actually supposed to do, as there are no constraints described by unit tests on what the code currently does. People do not know whether, after changing the code, the app still works at the customer's site, because there is no test coverage that checks whether parts of the application broke due to unwanted side effects.

People will shy away from changing the application; instead, they will leave the company in search of a sane working environment, and the app will finally be replaced altogether when yet another generation of developers or managers steps in.

One part of the solution is to start unit testing as early as possible and to add integration testing with automated tooling support.

Tests in Angular

Angular was designed to be testable from the moment it was invented and developed.

In Angular, there are unit tests written with Jasmine and Karma and end-to-end (e2e) tests implemented with Protractor. Both can be executed by the continuous integration tool or on every save during development.

Coming from other programming languages where unit tests also exist, understanding Jasmine behaviour-driven tests is not that hard, because the concepts of a test suite, a setup and a tear-down step, and individual tests within a suite correspond to concepts in other languages.
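
A plain Jasmine suite, without any Angular involved, already shows all of these concepts (the Calculator class is made up for illustration):

class Calculator {
  add(a, b) {
    return a + b;
  }
}

describe('Calculator', () => {
  let calculator;

  // setup step, runs before every individual test
  beforeEach(() => {
    calculator = new Calculator();
  });

  // tear-down step, runs after every individual test
  afterEach(() => {
    calculator = null;
  });

  // an individual test within the suite
  it('adds two numbers', () => {
    expect(calculator.add(1, 2)).toBe(3);
  });
});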

Where it gets hard is when Angular-specific parts are mixed into the Jasmine tests. Understanding the Angular-specific parts involved in an Angular unit test for a component is hard, because these parts simply do not exist in other programming languages.

Testing with Jasmine and Karma

Jasmine is a behaviour-driven testing framework for JavaScript. Karma is a test runner for JavaScript. It starts a web server serving the testing code and allows a browser to access the served code. The browsers are started and controlled by Karma.

The combination of Jasmine and Karma is used extensively by Angular. Angular adds Angular specifics on top of the otherwise plain JavaScript tools Jasmine and Karma.

Angular Specifics

The Angular-specific parts in Jasmine unit tests are the ComponentFixture and the TestBed. The TestBed forms the environment for dependency injection by creating an NgModule just for running a test. The ComponentFixture wraps the component instance under test.

TestBed

The TestBed is used to create an Angular module on the fly. That module is only used for the unit test at hand, in contrast to the modules you use to organize your code. It contains all the services, spies, mocks and other resources needed to successfully run the unit test. When the test run ends, that module is removed from memory; it only lives during the execution of the test suite.
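
A minimal sketch of how such a test-only module is configured, assuming the ContactEditComponent from the snippets below and a hypothetical ContactService that is replaced by a spy object (imports from @angular/core/testing are omitted, as in the other snippets):

beforeEach(async () => {
  // spy object standing in for the real service
  const contactServiceSpy = jasmine.createSpyObj('ContactService', ['getContact']);

  await TestBed.configureTestingModule({
    declarations: [ContactEditComponent],
    providers: [{ provide: ContactService, useValue: contactServiceSpy }],
  }).compileComponents();
});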

The TestBed is then used to create the ComponentFixture through a call to its createComponent() method. (createComponent() is usually called in beforeEach().)

beforeEach(() => {
  fixture = TestBed.createComponent(ContactEditComponent);
  component = fixture.componentInstance;
  fixture.detectChanges();
  ...
});

The ComponentFixture is actually not the instance of the component, which is tested! It is not the system under test. In the snippet above, you can see the line of code:

component = fixture.componentInstance;

The ComponentFixture can be asked for the system under test using the componentInstance property. It will return the instance of the component under test.

It seems as if a ComponentFixture wraps the instance of the Component that is tested.

Here is what is so very confusing to me: the TestBed.createComponent() method, despite being named 'createComponent', does not return a component! Instead it returns a ComponentFixture!

Because the ComponentFixture was created from the TestBed, it will use the providers and services that have been configured into the TestingModule which was created in the first step. That means your spies and mocks are now used by the fixture.

The ComponentFixture is also used to run change detection manually via its detectChanges() method, because in unit tests the Angular change detection system is not running automatically. You have to trigger it manually so that all changes are reflected in the DOM before you can query the DOM in your assertions.

ComponentFixture

A ComponentFixture is an object, which wraps the instance of the Component under test. The component instance uses the mocks and spies configured into the TestBed it was created by.

In the individual unit tests, that is, in the describe() and it() functions, the component instance is used to call methods on and to check how its state changes.

beforeEach(() => {
  fixture = TestBed.createComponent(ContactEditComponent);
  component = fixture.componentInstance;
  fixture.detectChanges();
  ...
});


describe('that, when using the FavoriteComponent', () => {
  it('should display a star when clicked', fakeAsync(() => {
    ...
    component.click();
    ...
    expect(element.nativeElement.value).toBe('selected');
    ...
  }));
});

Angular Data Flow

This post lists the ways you can send data around in an Angular application, which will be referred to as data flow.

Using interpolation, data in a component's properties can be output to the HTML template. But there are also property, class and other bindings such as two-way binding (banana in a box). Data can be exchanged between child (@ViewChild, @ViewChildren, @ContentChild, @ContentChildren decorators) and parent components. Events can be sent (EventEmitter). Forms can be used to submit data with validation. But why are forms needed in the first place, when we have data binding?

To a beginner all these concepts are confusing, so this post lists the interactions and explains their major benefit and when to use them.

Interpolation

The value stored in a component’s property can be output on a template using the interpolation syntax.

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  title = 'cockpit';
}

In the template (app.component.html):

{{title}}

Interpolation also allows you to perform calculations and function calls.

{{10 + 20}}
{{functionCall()}}

The value can be placed anywhere in the template and is rendered as is. If you want to put a value from a component into an attribute of a DOM element or child component, do not use interpolation but use property binding.

Interpolation using getters

If your component contains a getter

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  title = 'cockpit';
  username = 'username';

  get user(): string {
    return this.username;
  }
}

interpolation also works with the getter:

{{user}}

Getters and Change Detection

As Angular uses an elaborate change detection mechanism, a getter should not return a new object each time it is called but should instead return an immutable object which remains identical unless the data actually did change.

For an object to be immutable, the object is never modified but recreated from scratch each time the data changes. If the data stays the same, the getter returns the same immutable object instance instead of returning a new string or a new object on every call.

That way, using immutable objects, change detection can compare the objects returned by getters correctly and only redraw that part of the page when a new object is present, which means the data actually changed.

Refrain from constructing new objects on the fly and returning them from a getter, as this throws change detection into a loop and causes Angular to perform unnecessary work when rendering the page.
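
A small sketch of the difference (the class and field names are made up): the first getter constructs a fresh array on every call, so change detection sees a "new" value on every cycle, while the second returns a cached instance that is only replaced when the data really changes.

export class UserListComponent {
  private users: string[] = ['alice', 'bob'];
  private sortedUsers: string[] = [...this.users].sort();

  // problematic: a new array instance on every change detection cycle
  get usersSortedBad(): string[] {
    return [...this.users].sort();
  }

  // better: return the cached instance and only rebuild it when writing
  get usersSorted(): string[] {
    return this.sortedUsers;
  }

  addUser(name: string): void {
    this.users = [...this.users, name];
    this.sortedUsers = [...this.users].sort();
  }
}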

Naming fields and their get and set methods

Coming from a Java background, where getters and setters are also available, the expectation is the following syntax:

public class MyClass {
    private String fieldname;

    public String getFieldname() {
        return this.fieldname;
    }

    public void setFieldname(String fieldname) {
        this.fieldname = fieldname;
    }
}

This means, getters and setters are in essence normal member functions or methods. They have the conventional set and get prefixes to make them immediately identifiable as getters and setters. This convention is also mandated by the definition of a Java bean.

In Angular/TypeScript, the keywords get and set exist and are special syntax for getters and setters. The downside of this explicit notation is that the getters and setters basically use the identifier that would be used for the field itself! There is a naming conflict here. How do you resolve that conflict? This depends on the conventions you agree upon in your project. There is an underscore convention for the backing field, which apparently is discouraged nowadays. Alternatively you can choose a different name for the field and use the compact identifier for the getters and setters. Ultimately there is no definitive solution to the name conflict issue.
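
A short TypeScript sketch of both options (the class and field names are made up for illustration):

class UserComponent {
  // option 1: underscore-prefixed backing field
  private _username = '';

  get username(): string {
    return this._username;
  }

  set username(value: string) {
    this._username = value.trim();
  }

  // option 2: a differently named backing field
  private storedTitle = '';

  get title(): string {
    return this.storedTitle;
  }

  set title(value: string) {
    this.storedTitle = value;
  }
}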

Event Binding – Calling methods in components from the UI

Event binding uses parentheses around the event name like this: (event)="eventHandlerMethod()"

You can bind to the click event of a button and call a function in your component.

login() {
this.authService.login();
}

logout() {
this.authService.logout();
}

In the template:

<button (click)="login()">Login</button>
<button (click)="logout()">Logout</button>

Another example of event binding is to bind a method of a component to the submit event of an Angular template-driven form (https://angular.io/guide/forms).

<form (ngSubmit)="onSubmit()" #heroForm="ngForm"></form>

When the form is submitted using the submit button, the onSubmit() method in the component is called.
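
For completeness, a minimal sketch of the component side of such a form (the HeroFormComponent name is borrowed from the Angular forms guide, the model object and the logging are placeholders):

import { Component } from '@angular/core';

@Component({
  selector: 'app-hero-form',
  templateUrl: './hero-form.component.html',
})
export class HeroFormComponent {
  model = { name: '' };

  onSubmit(): void {
    // called when the form is submitted via its submit button
    console.log('submitting', this.model);
  }
}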

@Input – Property Binding – Passing data from a parent to a DOM element or child component

Property binding uses square brackets like this: [attr]="value".

As the template of a component can contain other components, a parent-child hierarchy is formed. The parent, i.e. the component owning the template, can put a value into an attribute of a child component. The child component can make use of that attribute by mapping it to a field in its own class.

The mapping is created by decorating a field with the @Input() decorator. That field is now mapped to an attribute of the same name in the template. To pass a value to the child, the attribute has to be surrounded by square brackets (the binding operator). The binding operator takes the value inside the square brackets, tries to find a matching field in the child component and sets the value on that field.

Imagine a DetailsComponent that is supposed to display a model object. An example could be a UserDetailsComponent displaying a user domain model.

@Input()
user: User;

The template of the parent component contains:

<user-details [user]="user"></user-details>

In this example, the user variable used as value can be created by a *ngFor directive or can come from somewhere else. It will be supplied by the parent component in most cases.

The NgClass Directive

The NgClass directive can add CSS classes to or remove them from a DOM element. It expects an object as a parameter. The object contains the individual CSS classes as properties and boolean values that either add a particular class when the value is true or remove that class when the value is false. Instead of hardcoded booleans, the boolean value can be returned from a method in the component.

<div [ngClass]="{selected: selected(), alarm: alarm()}"></div>

Two-way Data Binding

Two-way binding uses square brackets containing parentheses around the attribute, like this: [(attr)]="value"

The banana-in-a-box syntax is used to send data from the UI to the component and also from the component back to the UI.

<input type="text" class="form-control" id="name" required [(ngModel)]="model.name" name="name">

The code snippet above contains an input field from a template-driven form. The two-way binding is applied to the ngModel directive which is defined in the FormsModule. ngModel connects input from the template to the component.

The assignment to the Two-way data bound ngModel is using model.name as a value. model.name refers to the name field in the model field in the component that houses the template form.

This is a sentence that nobody will ever understand, so let me rephrase it. model.name refers to application-defined fields. model is not a special keyword; it is a convention to name the target object that the form input is stored into model. model.name is an example of a field of the model, which is called name. The two-way binding to model.name will store the user input into the model field of the component, and inside the model it will store the input into the name property of the model object. If the object you store your data in is not called model, that is fine too, just specify the correct names in the two-way binding value.

Sending Events from a child to a Parent
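
A child component sends events to its parent through an @Output() property of type EventEmitter, which the parent then subscribes to with event binding. A minimal sketch (the component, selector and event names are made up for illustration):

import { Component, EventEmitter, Output } from '@angular/core';

@Component({
  selector: 'app-star-rating',
  template: `<button (click)="rate(5)">Rate 5 stars</button>`,
})
export class StarRatingComponent {
  @Output() ratingChanged = new EventEmitter<number>();

  rate(stars: number): void {
    // emit the event so the parent can react to it
    this.ratingChanged.emit(stars);
  }
}

In the parent's template, the child's event is bound like any other event:

<app-star-rating (ratingChanged)="onRatingChanged($event)"></app-star-rating>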

Difference between a Content Child and a View Child

Angular ultimately renders a tree of components. A tree data structure defines nodes and their children. Children are nested into their parent nodes. In Angular components are nested into components.

There are two ways to nest components in Angular:

  1. View Children
  2. Content Children

Nesting a child component into a parent component is done by using the child components selector / tag in a template.

The distinction between view child and content child is made by where, i.e. in which template, the child component is used.

If the child component is used directly in the parent’s template, then the nesting is view nesting and the child is a ViewChild.

If the parent component is used in some “third-party” template and child components are used inside the opening and closing tag of the parent in the same arbitrary “third-party” template, then this is referred to as content nesting and the children are content children.

An example of view nesting is a fixed combination of components. A ColorPicker component might have a nested view child that draws the color space into a square or circle for the user to click into, to select a color in an explorative manner. Let's call this component ColorField. The ColorPicker component might have another nested view child that represents the current color using sliders for the color's red, green, blue and alpha values. Let's call this component ColorCoordinates. In ColorPicker's own template, the ColorField and ColorCoordinates components are used, which by definition makes them view children.

An example of content nesting is a Tab component which is used in the template of a Dashboard component. As content children, the Tab component will have several TabPane components that the user can switch between. Instead of inserting TabPane components directly into the Tab component's template as view children, the TabPane components are added as nested tags to the Tab tag in the Dashboard component's template. This allows the user of the Dashboard component to add as many TabPanes to the Tab component as they want or need. Applying this idea further, the content of the TabPane components is again added directly in the Dashboard component's template, which makes it another example of content nesting.

A component can have view and content children at the same time. A component can also have neither view nor content children at all.

The interesting question is: when a component has view or content children, what can it do with them? The answer is that the component class will have references to those children and can call methods on those child components inside its own code.

Sometimes, classes annotated with the @Component decorator are referred to as the component's controllers. Having view or content children is a way for the parent component's controller to access the controllers of the component's children.
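
A brief sketch of how such references are obtained, reusing the ColorPicker example from above and a made-up Tab/TabPane pair (the query decorators and lifecycle hooks are standard Angular API, the component names are illustrative and would still need to be declared in a module):

import { AfterContentInit, AfterViewInit, Component, ContentChildren, QueryList, ViewChild } from '@angular/core';

@Component({ selector: 'app-color-field', template: `<canvas></canvas>` })
export class ColorFieldComponent {
  reset(): void { /* redraw the color space */ }
}

@Component({ selector: 'app-color-picker', template: `<app-color-field></app-color-field>` })
export class ColorPickerComponent implements AfterViewInit {
  // view child: used directly in this component's own template
  @ViewChild(ColorFieldComponent) colorField!: ColorFieldComponent;

  ngAfterViewInit(): void {
    this.colorField.reset();
  }
}

@Component({ selector: 'app-tab-pane', template: `<ng-content></ng-content>` })
export class TabPaneComponent {}

@Component({ selector: 'app-tab', template: `<ng-content></ng-content>` })
export class TabComponent implements AfterContentInit {
  // content children: projected between <app-tab> ... </app-tab> in some other template
  @ContentChildren(TabPaneComponent) panes!: QueryList<TabPaneComponent>;

  ngAfterContentInit(): void {
    console.log('number of tab panes:', this.panes.length);
  }
}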

Angular Deep Dive

In Angular you define components and their templates in Angular's syntax. The browser understands JavaScript. How does Angular translate all your components, bindings and templates to TypeScript and from TypeScript to JavaScript? This article contains the information I could find.

Links

Ahead-of-time (AOT) compilation
Explanation Video of the Angular Compiler
Angular Code on GitHub

The need for a Compiler

The answer to the question of how Angular converts the Angular syntax to JavaScript is that Angular contains its own compiler. This compiler converts templates to TypeScript and feeds that TypeScript to a TypeScript compiler to find type errors. It will then output messages for mistakes you made in writing your templates. This is necessary because Angular templates can contain logic such as referencing variables defined elsewhere, using pipes or using directives (ngIf, ngFor, ngSwitch, ngModel, ngStyle, ...). The code generated for type checking templates is never going to be executed in the browser, it is purely for outputting errors to the user!

The compiler also generates TypeScript code for the components you write. This code will actually run inside the browser.

The need for a Runtime

The compiler takes a component definition including the template and after type checking (see above) turns it into a ComponentDefinition. The runtime can execute the ComponentDefinition inside the browser.

The runtime can understand and execute the ComponentDefinitions. The question is: why is a ComponentDefinition not capable of running by itself, given that it is converted from TypeScript to JavaScript and JavaScript is runnable in a browser?

The answer is that the compiled component definitions are not self-contained: they call into Angular's runtime for services such as change detection, dependency injection and the actual DOM rendering instructions. The runtime therefore has to be shipped and loaded alongside the compiled component code.

Organizing JavaScript

Links

https://en.bem.info/methodology/
https://www.webcomponents.org/specs

Prolog

This post is about JavaScript usage on a larger scale. Writing small snippets is one thing, and you can get away with almost everything. Writing larger applications becomes a question of scaling. You need architectural patterns to make linear progress when working on large applications.

Scope and Modularity

JavaScript has Scopes. The global scope is the parent scope that always exists. Without any further preparation, variables will be assigned to the global scope. That means all variables from code in script-tags or imported .js files will by default live in the global scope unless local scope is introduced to house variables.

The issue with global scope is that variable names can clash and code ultimately stops working. It prevents good organization and code reuse. Modularity and therefore local scope is needed.

Hoisting with the var Keyword

Hoisting means that variable declarations that use the var keyword are moved to the top of the current script (tag or file) or, if they appear within a function, to the top of that function. Only variable declarations are hoisted, initializations are not moved to the top. That means a hoisted variable can be undefined because the initialization is not moved to the top.

The effect is the same as seen in older versions of C. Variables can only be declared at the top of a function. Hoisting is the automatic process of moving variable declarations to the top of the function or script tag or script file.
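
A tiny sketch of the effect:

console.log(counter); // undefined: the declaration was hoisted, the initialization was not
var counter = 1;
console.log(counter); // 1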

Hoisting with the let Keyword

Variables defined with the let keyword are not hoisted all the way to the top of the function or script (tag or file) as is the case with the var keyword; they are scoped to the nearest enclosing block. Their declaration is hoisted to the top of that block but not initialized, so accessing them before the declaration does not yield undefined but throws a ReferenceError (the so-called temporal dead zone).

The nearest scope is defined by everything that is enclosed in curly braces. That means if-statements, function bodies, loop bodies and even an artificial block defined by an opening and a closing curly brace define local scope.
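
A tiny sketch of block scoping with let:

{
  let temp = 42;
  console.log(temp); // 42
}
// console.log(temp); // ReferenceError: temp is not defined, the block scope has ended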

How to use var, let and const

Code is easy to read if you use const everywhere. If a variable value has to change, use let instead of const. Never use var.

Functions define local Scope

Each function introduces new local scope defined by the function body.

In the very good book Mastering Modular JavaScript by Nicolás Bevacqua, the author lists three ways functions (to be precise, the pattern is called IIFE, Immediately-Invoked Function Expression) were used a few years ago to define local scope similar to modules:

(function() {
console.log('IIFE using parenthesis')
})()

~function() {
console.log('IIFE using a bitwise operator')
}()

void function() {
console.log('IIFE using the void operator')
}()

Blocks define local Scope

Not only functions define local scope, blocks do too. A function body is a special case of a block.

In ES6 this code is possible

{  // open block
  let tmp = ···;
  ···
} // close block

The let keyword creates a variable in the local block scope (as opposed to the var keyword, which creates a variable that is hoisted to the top of the function, potentially changing the scope it is defined in).

Modules define local Scope

ECMAScript 2015 (ES6) introduced modules (ECMAScript Modules, ESM) as part of the JavaScript language. In node, which uses CommonJS by default, the ESM system was initially only available when specifying --experimental-modules and using the .mjs extension for modules.

Before ES6, custom libraries (CommonJS, RequireJS) provided module functionality for JavaScript programmers. Those custom libraries are still used extensively today. Probably because it is a massive undertaking to refactor all existing code to ES6.

So now there is a mix of CommonJS, RequireJS and ES6 modules used in the wild. The syntax for ES6 modules (export, import) differs from the CommonJS and RequireJS syntax (use of the exports-object, require).

While CommonJS is the dominant module system in node, RequireJS is more geared towards browsers. RequireJS implements the Asynchronous Module Definition (AMD) standard. TypeScript adopted the ES6 module syntax from the start.

Later, Browserify made it possible to bundle node modules into a browser-ready format and allowed the use of the node package manager and all its modules in the development of web applications that run in a browser. Today webpack is the most widely used bundler and has mostly taken over from Browserify.

Across all possible module systems, the common parts are that every file is a module and a module cannot be spread across multiple files. Every module has its own scope and context.
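
A small sketch of the ES6 syntax next to the CommonJS syntax (the file and function names are made up):

// math.mjs (ES6 module): named export
export function add(a, b) {
  return a + b;
}

// main.mjs (ES6 module): import
import { add } from './math.mjs';
console.log(add(1, 2));

// In CommonJS the same pair would look like this:
// math.js:  module.exports = { add: (a, b) => a + b };
// main.js:  const { add } = require('./math.js');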

Object Orientated Programming (OOP)

To me, object orientation is about combining state and functions that allow you to access and manipulate that state in one place that you can easily find even in large applications.

Even after weeks of not working on a project, it is clear that a person's name and address are stored in the Person class. It just makes sense and is intuitive.

In Java and C++ for example, a class has to be defined first. The class definition controls which member variables and which functions a class has. At runtime, instances (a.k.a. objects) are created from the class. No member variables or functions can be added or removed.

JavaScript is different in many ways. An object can be created without a class definition. At runtime, member variables and functions can be added and removed from objects.

OOP using Object Initializers

let personInstance = {
name: 'person1',
age: 30
}

// location A

personInstance.isBlocked = true;

// location B

delete personInstance.isBlocked;

// location C

In the example above, an object is defined (using object literal notation) without a class and without a constructor function (see below)! Instead a so-called object initializer is used, see https://developer.mozilla.org/de/docs/Web/JavaScript/Guide/Working_with_Objects

The object initializer is a block of object literal syntax that defines the object, its member variables and its functions. For alternatives to object initializers, look at constructor functions and classes below!

In the rest of the script, a member variable 'isBlocked' is added and initialized and then removed again. Logging personInstance.isBlocked at location A yields 'undefined', as the isBlocked member is not yet part of the object. At location B, logging will yield true as expected. At location C, logging will again yield 'undefined', as the member was removed.

let app = {

  settings: {
    container: document.querySelector('.calendar'),
    calendar: document.querySelector('.front'),
    days: document.querySelectorAll('.weeks span'),
    form: document.querySelector('.back'),
    input: document.querySelector('.back input'),
    buttons: document.querySelector('.back button')
  },

  init: function() {
    console.log('container: ', this.settings.container);
    console.log('calendar: ', this.settings.calendar);
    console.log('days: ', this.settings.days);
    console.log('form: ', this.settings.form);
    console.log('input: ', this.settings.input);
    console.log('buttons: ', this.settings.buttons);
  },
}

app.init();

The code above combines data (settings) and functions (init()) into an object (app). Then it calls a method on the app instance. The call will output the state stored in that instance.

There is a shorthand notation to add a function to an object. Instead of using the key-value pair notation ( functionName: function() { ... } ) you can use functionName() { ... }. In the example above that would be init() { ... } instead of init: function() { ... }.

The keyword: this

In JavaScript, this used inside a function refers to the object the function was called on. If the function is a member function of an object, that behaviour is not changed. In JavaScript, the this keyword is not tied to the object the function is defined on; it is tied to the caller.

In Java and C++, this used inside a member function refers to the object instance. Here, the this keyword has no relation to the caller!

In DOM event handlers such as click handlers, this refers to the object that emitted the event.

When a function is defined in global scope and the script is executed inside a browser, this refers to the window object. When strict mode is enabled in addition, the this keyword in global scope is undefined and does not point to the window object!

With arrow functions, the behaviour of the this keyword is different from normal functions, so arrow functions are not just syntactic sugar; they have their own characteristics. Arrow functions do not have their own this! Because they have no own this, calling an arrow function does not bind this to the caller. That in turn means that arrow functions do not shadow a this that already exists where the arrow function is defined. Since you can still use the this keyword inside an arrow function, the question remains what this refers to inside an arrow function: it refers to the this of the enclosing lexical scope at the point where the arrow function is defined. The good news for OOP is that an arrow function defined inside a method or constructor function therefore sees the this of that method, i.e. the object instance, and allows you to access the instance's member variables and functions.

When the this keyword is used inside a constructor function, this points to the newly created object instance.

See also the MDN documentation on arrow functions:

https://developer.mozilla.org/de/docs/Web/JavaScript/Reference/Functions/Arrow_functions

Constructor Functions

Instead of defining objects using the JSON notation, there is another way called constructor functions.

function Person(first, last, age, eye) {
  this.firstName = first;
  this.lastName = last;
  this.age = age;
  this.eyeColor = eye;
  this.name = function() {
    return this.firstName + " " + this.lastName;
  };
}

const myFather = new Person("John", "Doe", 50, "blue");
const myMother = new Person("Sally", "Rally", 48, "green");

console.log(myFather.name());

The example is taken from here: https://www.w3schools.com/js/js_object_constructors.asp

A function called Person is defined and later used in conjunction with the new keyword to arrive at the instance variables myFather and myMother.

The Person() function is referred to as the constructor function. Inside the constructor function, the this keyword actually points to the instance that is just being created. 

OOP with Prototypes

Functions can be added to a constructor function's prototype object; they are then shared by all instances created with that constructor and can be called on each instance later.
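
A small sketch, reusing the Person constructor function from above:

function Person(first, last) {
  this.firstName = first;
  this.lastName = last;
}

// added to the prototype: shared by all Person instances
Person.prototype.name = function() {
  return this.firstName + " " + this.lastName;
};

const myFather = new Person("John", "Doe");
console.log(myFather.name()); // "John Doe"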

OOP With Classes

The class keyword in JavaScript is syntactic sugar for JavaScript's prototype system. That means the compiler or interpreter transforms the keyword into other JS features so the programmer is freed from that task.

Classes were introduced in ES6 (ECMAScript 2015).
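
The same Person example written with the class keyword looks like this:

class Person {
  constructor(first, last) {
    this.firstName = first;
    this.lastName = last;
  }

  name() {
    return this.firstName + " " + this.lastName;
  }
}

const myMother = new Person("Sally", "Rally");
console.log(myMother.name()); // "Sally Rally"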

Epilog

Especially with languages I am not proficient in, I personally find myself in a situation where my progress keeps getting slower and slower over a day of developing software, just because I keep battling the language and the question of how to organize the code as the application gets bigger. It gets slower until I come to a complete stop and have to give up for the day. The next day there is even less progress. It becomes similar to wading through a swamp: you are finally so tired that you give up and the swamp swallows you.

With languages I am proficient in, I find that I am not blocked by the language itself. I am blocked by the medium to hard problems I have to solve, but the programming language is a tool that makes it easier to solve them rather than slowing me down.

When you find yourself in a situation where the language is slowing you down, you have to realize that your programming skills in that language are lacking and you have to go back to school.

This article showed ways to use JavaScript that are applicable to larger problems.

webpack

webpack is a build system for JavaScript which requires node to run. It refers to itself as a static module bundler. It views your source files as modules and organizes modules and their dependencies inside a graph. It will output one or more bundles after traversing the graph. So modules in a dependency graph in, bundles out.

webpack uses one entry point which is similar to a main() function in a programming language as it marks the starting point of operation. The entry point, entry for short, is the module where webpack starts to traverse the dependency graph.

Loaders do Load Modules

In webpack, you import modules to build up the dependency graph. A module can be any file, as long as there is a loader for that type of file. By default, webpack understands JavaScript and JSON files and can convert those into modules and add them to the dependency graph via import statements.

Additional loaders allow webpack to understand other types of files, convert them into modules and add them into the dependency graph.

When webpack sees an import, it looks into its loader definitions and, if it finds a matching loader, applies that loader to the import. The module rules for loader definitions are contained in webpack.config.js.

const path = require('path');

module.exports = {
  output: {
    filename: 'my-first-webpack.bundle.js',
  },
  module: {
    rules: [{ test: /\.txt$/, use: 'raw-loader' }],
  },
};

In the rule above, test defines a regular expression that determines which imported files the loader matches, and the use part defines the loader to apply when the regex matches. In this example, the raw-loader will be applied to all imported .txt files.

Ultimately, your resulting bundle or bundles will contain all the JavaScript, HTML, CSS, images and other files that you import as modules. That’s right, you treat CSS files and everything you need as a module when using webpack. You will in fact import CSS files! Pretty exciting concept if you ask me!

The Entry

webpack uses a configuration file webpack.config.js. Here you specify the entry:

module.exports = {
  entry: './path/to/my/entry/file.js',
};

The entry will be a file called ./src/index.js in most cases.

Plugins

Plugins are added via the webpack.config.js file. They are then called by the webpack compiler during compilation.

An example configuration is:

const HtmlWebpackPlugin = require('html-webpack-plugin'); //installed via npm
const webpack = require('webpack'); //to access built-in plugins
const path = require('path');

module.exports = {
  entry: './path/to/my/entry/file.js',
  output: {
    filename: 'my-first-webpack.bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        use: 'babel-loader',
      },
    ],
  },
  plugins: [
    new webpack.ProgressPlugin(),
    new HtmlWebpackPlugin({ template: './src/index.html' }),
  ],
};

Here, the HtmlWebpackPlugin is used.

Example

First, create a node project.

cd dev/javascript
mkdir webpack_helloworld
cd webpack_helloworld
npm init -y
code .

Then install the dependencies.

npm install --save-dev webpack webpack-cli
npm install --save-dev html-webpack-plugin

Setup the files and folders.

Insert a webpack.config.js next to the package.json:

const HtmlWebpackPlugin = require('html-webpack-plugin');
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './index.js',
  output: {
    path: path.resolve(__dirname, './dist'),
    filename: 'index_bundle.js',
  },
  plugins: [new HtmlWebpackPlugin()],
};

This webpack.config.js requires an index.js file, so you have to create one next to the webpack.config.js. Inside the index.js file, just output some text.

console.log('webpack works! - Hello World!');
alert('webpack works! - Hello World!');

To start the webpack build, add a script to the package.json:

"scripts": {    
"build": "webpack --config webpack.config.js",
"test": "echo \"Error: no test specified\" && exit 1"
}

Start the webpack compilation

npm run build

Now check your project folder. There is a dist folder generated for you. Inside that dist folder, the HtmlWebpackPlugin has generated an index.html file that imports a generated index_bundle.js. The index_bundle.js file contains all entry points and their dependencies defined in webpack.config.js. That means it will contain the code from index.js in this example.

Revisiting the Result

Now, webpack created a dist/index.html file for us, bundled all entry modules (currently JavaScript files) and all their dependencies, and imported the bundles automatically into the dist/index.html file.

This is a wonderful situation for building a framework that generates all HTML markup programmatically. You could build all HTML via JavaScript's DOM API from your JavaScript entry point. This is not necessarily what you want, though. If you want to use webpack and its hot reload feature to work with CSS and HTML, then you most likely want an HTML file you have full control over.

Using HTML with webpack

The question is, how do you add your own HTML markup to the generated index.html? The answer to this question is the HtmlWebpackPlugin's template feature. Credits go to this solution: https://stackoverflow.com/questions/39798095/multiple-html-files-using-webpack/63385300 and the documentation https://github.com/jantimon/html-webpack-plugin/blob/main/docs/template-option.md

A template is an HTML file that you put into a src folder and that you configure in the HtmlWebpackPlugin in webpack.config.js:

const HtmlWebpackPlugin = require('html-webpack-plugin');
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './index.js',
  output: {
    path: path.resolve(__dirname, './dist'),
    filename: 'index_bundle.js',
  },
  plugins: [new HtmlWebpackPlugin({
    filename: 'index.html',
    template: 'src/index.html',
    chunks: ['main']
  })],
};

A valid template (src/index.html) looks like this:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Webpack App 2</title>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
  </head>
  <body>
    <h1>Hewlo Wurl!</h1>
  </body>
</html>

webpack will first copy this file into the dist folder and then modify the copied file: it will insert a script tag and that way import the compiled modules (bundles) into the HTML file copied from the template.

To try this out, build again (npm run build) and reopen dist/index.html in your browser. You should first get an alert box, which comes from the entry module index.js (if you followed this example), and then you should see the custom HTML from the index template! Wonderful, we are almost there!

The next question is, how to import CSS.

Using CSS with webpack

Let's say you want to use ./src/index.css:

h1 {
color: red;
}

First install the webpack css-loader and the style-loader

npm install --save-dev css-loader
npm install --save-dev style-loader

Next add a rule into webpack.config.js that makes webpack apply the css plugin to css files:

const HtmlWebpackPlugin = require("html-webpack-plugin");
const path = require("path");

module.exports = {
  mode: "development",
  entry: "./index.js",
  output: {
    path: path.resolve(__dirname, "./dist"),
    filename: "index_bundle.js",
  },
  plugins: [
    new HtmlWebpackPlugin({
      filename: "index.html",
      template: "src/index.html",
      chunks: ["main"],
    }),
  ],
  module: {
    rules: [
      {
        test: /\.css$/i,
        use: ["style-loader", "css-loader"],
      },
    ],
  },
};

Now this is where it gets a little weird, at least to my liking. You will use an import statement to import your CSS file into the entry module. The import will make webpack apply the css-loader and style-loader to the CSS file.

import css from "./src/index.css";

console.log('webpack works!');
alert('test');

Now rebuild and reload the generated index.html. The heading should be displayed in red.

Using JSON files with webpack

Say you have a JSON file on your hard drive that contains JSON data you need to process in some JavaScript routines. You can import the JSON file using a Promise and process the data once the Promise resolves successfully.

import(
  './data/testdata.json'
).then(({ default: testdata }) => {

  let jsonOutputElement = document.getElementById("rawJson");
  jsonOutputElement.innerHTML = JSON.stringify(testdata, undefined, 2);

  // do whatever you like with your testdata variable
  //console.log('testdata: ', testdata);
  initTreeStructure(testdata);
});

For webpack to load the JSON file, it uses a JSON loader that not only reads the file into a string but also parses the JSON into a JavaScript object.

In earlier versions of webpack, a JSON loader had to be installed manually and added to the list of loaders in webpack.config.js:

npm install json-loader --save-dev

module.exports = {
  module: {
    loaders: [
      {
        test: /\.json$/,
        loader: 'json-loader'
      }
    ]
  }
}

It seems that manually adding a JSON-loader in newer versions of webpack actually causes issues because the added JSON-loader conflicts with the onboard JSON-loader leading to parse errors during JSON parsing! I found that in the current version of webpack, it is sufficient to just import JSON files without installing and configuring any JSON-loader.

Using HTML components with webpack

https://medium.com/hackernoon/using-html-components-with-webpack-f383797a5ca

Hot Module Replacement aka. Hot Reload

webpack can be instructed to watch your files for changes, compile and reload the page in the browser for you. That way the latest changes are available on save.

To enable hot reload, first install webpack-dev-server

npm install webpack-dev-server --save-dev

Now, edit package.json and add a serve script:

"scripts": {
"build": "webpack --config webpack.config.js",
"serve": "webpack serve",
"test": "echo \"Error: no test specified\" && exit 1"
},

Run the serve script

npm run serve

This will bring up a webpack development server with hot reload capabilities. In the console, a URL is printed. You have to open that URL in a browser to get the hot reloaded page. Do not just open the HTML page in your dist folder, that file will not be hot reloaded. Your browser has to access the web page from the dev server!

Test it out, change your css or your JavaScript. The browser is instructed to reload the page and your changes are immediately reflected in the browser.

Sequelize

Introduction

Sequelize describes itself on the sequelize homepage:

Sequelize is a promise-based Node.js ORM for Postgres, MySQL, MariaDB, SQLite and Microsoft SQL Server. It features solid transaction support, relations, eager and lazy loading, read replication and more.

Sequelize is an ORM (Object Relational Mapper) which allows you to interface with a SQL database without writing SQL but purely using the domain objects of your node application.

Sequelize shields you from SQL to the point where it will automatically generate and execute table creation statements and all the insert, update, select and delete statements for you! You do not have to create tables in your database server, Sequelize will do it for you.

While reading this article, you can look at this repository. It is not good code by any means but maybe it helps you to see a running example which you can modify and do your own tests on.

Defining Models

A model is an object that has attributes which will be stored in the columns of a database table. Models can be connected to each other using associations.

An example would be an application that manages bank accounts. There is a model for an account, each account has amounts of money in them and money is transferred between accounts by transactions. You could model Accounts, Amounts and Transactions. Sequelize will allow you to perform CRUD for those objects without writing a single line of SQL.

The model folder and index.js

In order for Sequelize to manage your objects, you have to define the models first so Sequelize knows about their structure and can generate SQL for you.

This approach is taken from https://github.com/sequelize/express-example/blob/master/models and it works very well.

One way of managing your models is to have a separate folder to store all your model definitions in. A model definition is a node module that exports a model. A model optionally has a function that defines its associations to the other models.

Inside that model folder, you will also have an index.js file. This index.js file will first scan for all the models defined in the models folder, set them up and then export an object called db. db will be the object your node application uses to interface with Sequelize. db contains a connection to the database and all the model definitions that you can store in the database.

index.js will perform the following steps to set up models:

  • It will connect to the database and store the connection in the db object
  • It will scan the model folder and load all model definitions it finds
  • It will call the associate() function on each model (if defined) so that a model can define its associations to the other models

TypeError: defineCall is not a function

One very important thing to remember is the following: the error "TypeError: defineCall is not a function" is thrown if your model folder contains any files other than valid Sequelize Model-Modules and the index.js file! If you put any non-Sequelize code into the model folder, or if you comment out the entire contents of one of your Model-Module files, Sequelize will get confused and throw the "defineCall" error! So do not comment out any of your Model-Modules and do not put any other files into the model folder!

An example Model-Module (models/account.model.js) for accounts is:

module.exports = (sequelize, DataTypes) => {

    var Account = sequelize.define('Account', {
        'id': {
            type: DataTypes.INTEGER(11),
            allowNull: false,
            primaryKey: true,
            autoIncrement: true
        },
        'name': {
            type: DataTypes.STRING(255)
        }
    });

    Account.associate = function (models) {

        // optional, foreign key is stored in the source model
        // (= Account has foreign key to RealAccount)
        models.Account.belongsTo(models.RealAccount, { foreignKey: 'realaccountid', targetKey: 'id' });

        models.Account.hasMany(models.Amount, { foreignKey: 'virtualaccountid', targetKey: 'id' });
    };

    return Account;
};

This module defines an Account model and its associations to a RealAccount model and several Amount models.

The index.js file looks like this:

// inspired by https://github.com/sequelize/express-example/blob/master/models

var fs = require('fs');
var path = require('path');
var Sequelize = require('sequelize');

var basename = path.basename(__filename);

const sqlConfig = {
    user: 'root',
    password: 'test',
    server: '127.0.0.1:3306',
    database: 'cash'
}

var sequelizeConnection = new Sequelize(sqlConfig.database, sqlConfig.user, sqlConfig.password, {
    host: 'localhost',
    port: 3306,
    dialect: 'mysql',
    logging: false,
    pool: {
        max: 5,
        min: 0,
        idle: 10000
    }
});

var db = {
    Sequelize: Sequelize,
    sequelize: sequelizeConnection
};

// collect all model files in the models folder to automatically load all the defined models
fs.readdirSync(__dirname)
    .filter(file => {
        return (file.indexOf('.') !== 0) && (file !== basename) && (file.slice(-3) === '.js');
    })
    .forEach(file => {
        var model = db.sequelize['import'](path.join(__dirname, file));
        db[model.name] = model;
    });

// if a model has an associate method, call it.
// The associate method will define the relationships between the models.
Object.keys(db).forEach(modelName => {
    if (db[modelName].associate) {
        db[modelName].associate(db);
    }
});

module.exports = db;

 

Using the db object

To store and load into the database, you have to use the db object. Import the db object:

var db = require('../model/index.js');

Now the db object is ready.

For Testing: Erase and recreate all the tables for all models

Note: never have this in your production code! All data will be lost! It is useful for testing, though!

During the testing phase, it is useful to let Sequelize erase all tables and recreate them for you. To do so, execute sync() with the force parameter:

// delete the entire database
await db.sequelize.sync({
    force: true
});

Inserting a Model

The functions that operate on models are added to a services.js file in a separate folder, outside of the model folder (called persistence_services in this example).

It is a mistake to put the services.js file into the model folder, as Sequelize will get confused and produce the "TypeError: defineCall is not a function" error.

The module to insert a model looks like this:

async function addAmountByIds(db, realAccountId, virtualAccountId, amount) {
    return db.Amount.create({
        amount: amount,
        realaccountid: realAccountId,
        virtualaccountid: virtualAccountId
    });
}

module.exports = {

    addAmountByIds: addAmountByIds

};

To use this method:

var services = require('../persistence_services/services');
var db = require('../persistence/index.js');

var realAccount = await services.addAmountByIds(db, 1, 2, 299);

You can see that the function addAmountByIds() is declared async because Sequelize is inherently asynchronous and uses Promises for everything.

addAmountByIds() will not return the created Amount object instantly; it will return a Promise. The Promise is your handle to an asynchronous flow of operation which creates the Amount object in the database asynchronously. As soon as that flow finishes, the Promise resolves with the actual result, which is the created Amount object.

Declaring addAmountByIds() async makes it a non-blocking call, which means that when you call it, it immediately returns a Promise and your normal program flow continues while the database work proceeds in the background.

This way of dealing with asynchronicity is awesome but my brain just can’t handle it. I cannot write a working application dealing with hundreds of Promises instead of real objects. Because of my own imperfection, the examples will always call the service functions with the await keyword.

The await keyword turns the non-blocking calls into plain old blocking-looking functions. The program flow will pause after calling the service function until the real object is returned from the Promise. That way, you are sure that the result returned is a valid object that is now persisted in your database. You can then write sequential code and solve your problems without thinking about asynchronous code.

then() – chaining

An alternative way of dealing with asynchronous Promises is to chain Promises together with calls to the then() function.

The callback specified in the then() function is called as soon as the Promise resolves with the real object. Besides await, this is another way to wait until the result is ready. then() chains are a very awkward way of writing sequential code, in my humble opinion. Maybe I have just never seen code that is high-quality and easily readable using then() chains, but I cannot imagine easy-to-read code using then() chains as of now.

Accessing Associations via Properties

If you have objects associated to each other such as a BlogPost and all Comments people left on that BlogPost, you can access all Comments if you have a reference to the BlogPost object just by calling a property on the object.

This means, you do not have to explicitly query for the Comments instead Sequelize will lazy load the Comments for you in the background.

var comments = await blogPost.getComments();

Again, the Promise is resolved by the await keyword and the comments variable contains the loaded comments.

Associations

How do you connect a BlogPost to a Comment for example?

Remember that index.js will call the associate() function of each model during initialization? The associate() function is where you define the associations for each model.

You should consult the Sequelize documentation to find the right type of association that describe the situation you want to model best.

As an example, if a BlogPost should be associated to many Comments, define:

BlogPost.associate = function (models) {

    models.BlogPost.hasMany(models.Comment, {
        foreignKey: 'blogpostid',
        targetKey: 'id'
    })

};

Not creating Objects in Associations

I have not fully understood why, but calling setters will automatically insert new objects into the database unless you use the { save: false } option:

transaction.setSource(targetAccountObject, { save: false });

Updating Objects

If you want to change the amount in an account, you have to change the value and trigger an update:

amount.amount += 299;
await amount.save();

Here, the amount is increased by 299 and an await is used to wait for the result of the save() call which updates the object in the database.

Another way is to use update():

amountObject.update({
    amount: newAmount
}).then(() => { });

Querying Objects

The findByPk() convenience method allows you to query by primary key.

var result = await db.RealAccount.findByPk(amountObject.realaccountid);

General queries are executed using findAll() and a where clause:

var result = await db.Amount.findAll({
    where: {
        virtualaccountid: targetAccountObject.id
    }
});

 

Creating an Angular Application

Generating the Application using ng new

Angular uses the ng tool to generate all sorts of code for you. In fact, ng is used to generate all of the Angular items, from components to the entire application.

First install the current long-term support version of Node.js using nvm:

nvm install --lts
nvm use --lts

Alternatively, use the latest installed release. First list the installed versions:

$ nvm ls
v8.11.2
        v8.12.0
         v9.3.0
       v10.14.2
       v10.15.3
       v10.16.0
        v11.4.0
        v12.0.0
        v12.2.0
       v12.13.1
       v13.11.0
       v14.17.0
->     v14.17.3
        v16.1.0
         system

Now install and use the latest version:

$ nvm install v16.1.0
$ nvm use v16.1.0

ng is added to your system by installing the Angular CLI globally using the node package manager npm:

npm install -g @angular/cli

You can check the Angular CLI version:

ng version

At the time of this writing, the version Angular CLI: 12.1.3 is current.

The global ng installation is used to generate a new application:

ng new <APPLICATION_NAME> --style=scss --routing

This will create a folder called <APPLICATION_NAME> in the current working folder containing the new project. It will use the sass processor for CSS stylesheets using the scss syntax. It will automatically use routing.

You can also go through an interactive process where the Angular CLI asks you about all options before creating the project.

ng new <APPLICATION_NAME> --interactive

Once a project has been generated using the global version of the Angular CLI, you change into the project folder, and from there on out the local version of the Angular CLI, as specified in package.json, is used. This means that even when the global Angular CLI is updated to a newer version, your application will not break, because it locally uses the version specified in package.json.

This helps project stability, as you can update the global Angular CLI version for new projects and keep the local Angular CLI version to prevent the application from breaking due to version differences.

Additional Dependencies

For the NgRx store architecture:

npm install @ngrx/store --save
npm install @ngrx/effects --save

Starting the Application

You can open the application folder using Visual Studio Code. Within Visual Studio Code, open a terminal using the Terminal menu in the menu bar and the New Terminal item within it.

To start the application, type npm start, which will internally call ng serve.

npm run start

You can also call ng serve directly

ng serve

The application is running at http://localhost:4200/

Adding a Module

Angular is particularly useful because it has a rigid scheme for organizing code, which benefits structure and ultimately the quality of your application code in the long run.

That scheme consists of the use of Angular modules (NgModules) which contain components.

There are two kinds of modules: the root app module and feature modules.

Initially, ng generates a module, the app module, containing the app component, which is used as the starting point for the application (called bootstrapping).

Starting from the main property inside the angular.json file, a main.ts TypeScript file is configured. webpack will execute this main.ts / main.js file when starting the application.

Inside main.ts, a call to bootstrapModule() is made and the app module (AppModule) is specified as a parameter.

Looking into app/app.module.ts, you can see the bootstrap property of the @NgModule decorator. It contains the AppComponent. That means, when Angular starts the app module, it will initialize the AppComponent first and use it as an entry point into the application.
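
For reference, the generated main.ts of that era typically looks roughly like this (details vary by Angular version):

import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import { AppModule } from './app/app.module';
import { environment } from './environments/environment';

if (environment.production) {
  enableProdMode();
}

// bootstrap the root module, which in turn bootstraps the AppComponent
platformBrowserDynamic()
  .bootstrapModule(AppModule)
  .catch((err) => console.error(err));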

Feature modules can be added to the application to extend it with new functionality. For each separate feature, you create a new feature module so that the feature modules stay independent from each other (strong cohesion, weak coupling). A module contains all components, services, interceptors, decorators, pipes, model classes and everything else needed to make a feature work as an independent unit.

Move common functionality such as utilities into their own modules to reuse them from several modules.
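
As a sketch, such a shared module could look like this (the module and component names are made up for illustration):

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

// Hypothetical reusable component, e.g. a spinner used by several features
import { LoadingSpinnerComponent } from './components/loading-spinner/loading-spinner.component';

@NgModule({
  declarations: [LoadingSpinnerComponent],
  imports: [CommonModule],
  // Export everything that other modules should be able to use in their templates
  exports: [LoadingSpinnerComponent],
})
export class SharedModule {}

Any feature module that adds SharedModule to its imports can then use the exported components in its templates.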

The angular CLI allows you to add a module:

ng generate module <MODULE_NAME>

To generate a module that houses all user-related code, execute

ng generate module user

Note that the name was chosen to be user and not user-module or anything similar. The Angular CLI will automatically generate a user folder and a user.module.ts file. The CLI appends the module suffix to the generated files for you!
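
The generated user/user.module.ts starts out as an almost empty module, roughly:

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

@NgModule({
  declarations: [],
  imports: [
    CommonModule
  ]
})
export class UserModule { }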

Lazy Loaded Feature Modules

The documentation is available at https://angular.io/guide/lazy-loading-ngmodules

A word of advice: lazy loaded feature modules have been updated in newer versions of Angular. This article shows console output from version 12. If your output differs, consider migrating to the newest Angular version.

When you want a lazy loaded module, do not import the module into the app module; instead, use the router. The idea behind a lazy loaded module is to only load it into memory when the router navigates to it.

Therefore the router is used to load the module when a certain route is visited. To set up the lazy loading, update the file app-routing.module.ts:

const routes: Routes = [
  {
    path: 'projects',
    loadChildren: () =>
      import('./projects/projects.module').then((mod) => mod.ProjectsModule),
  },
];

Here, once the path ‘projects’ is visited, the router executes the import() function, which in this case loads the lazy loaded projects module.

The question is: which component will be rendered when the path is visited? The code in the app-routing.module.ts file does not specify a component to load within the projects module! The projects module itself contains a routing configuration file called projects-routing.module.ts which specifies all the routes and components.

The file looks like this:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

import { ProjectContainerComponent } from './components/project-container/project-container.component';

const routes: Routes = [
  {
    path: '',
    component: ProjectContainerComponent,
  },
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule],
})
export class ProjectsRoutingModule {}

One last change is necessary: in the lazy loaded feature module, import the ProjectsRoutingModule from the projects-routing.module.ts file and add it to the imports of the feature module so that it takes part in the routing:

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { ProjectContainerComponent } from './components/project-container/project-container.component';

import { ProjectsRoutingModule } from './projects-routing.module';

@NgModule({
  declarations: [
    ProjectContainerComponent
  ],
  imports: [
    CommonModule, ProjectsRoutingModule
  ]
})
export class ProjectsModule { }

When you start your app, you should see a listing of chunks. Chunks are the files containing the application code that are ultimately downloaded to the client to run your app. You should see your lazy loaded module listed as a Lazy Chunk File, as opposed to the Initial Chunk Files for eagerly loaded modules.

Initial Chunk Files                    | Names         |      Size
vendor.js                              | vendor        |   2.39 MB
polyfills.js                           | polyfills     | 128.55 kB
runtime.js                             | runtime       |  12.51 kB
main.js                                | main          |   9.50 kB
styles.css                             | styles        | 118 bytes

                                       | Initial Total |   2.54 MB

Lazy Chunk Files                       | Names         |      Size
src_app_projects_projects_module_ts.js | -             |   5.81 kB

Adding a Service

To add a service into a module, you can use the Angular CLI.

ng generate module auth
cd src/app/auth
ng generate service services/auth

This will create the auth.service.ts file inside a services folder within the auth module.

import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'root'
})
export class AuthService {

  constructor() { }
}

@Injectable with providedIn: 'root' means that the provider for this service is registered with the root injector. This is documented here: https://angular.io/guide/providers

A provider is responsible for creating or retrieving an instance of a dependency used in dependency injection.
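
As a sketch, a component simply declares the service as a constructor parameter, and the injector uses the provider to supply the instance (the LoginComponent and its import path are made up for illustration):

import { Component } from '@angular/core';

import { AuthService } from './services/auth.service';

@Component({
  selector: 'app-login',
  template: '<button (click)="onLogin()">Login</button>',
})
export class LoginComponent {

  // Angular resolves AuthService via its provider and injects the instance
  constructor(private authService: AuthService) {}

  onLogin(): void {
    console.log('AuthService instance in use:', this.authService);
  }
}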

Building an Angular Example App

We will build an application called ‘cash’ that helps you organize your finances. The code is available on github.

The cash application manages accounts.

Real Accounts mirror an existing account that you own at your bank.

Virtual Accounts are accounts that you can create within cash and which do not exist at your bank.

Virtual Accounts can have zero or one real account connected to them.
That means virtual accounts can exist that are not backed by a real account. A real account can be connected to at most one virtual account.
That means it is not possible to connect a real account to two virtual accounts.

If a virtual account is connected to a real account, the real account is hidden by the virtual account. The amount of money that is stored in the real account is now accessible only via its virtual account.

The way the cash application is structured is that whenever you add a real account to the cash application, cash will automatically create a virtual account for you and connect it to the real account.

You basically only work with virtual accounts when you use the cash application. The layer of virtual accounts hides the layer of real accounts beneath it; you only interface with virtual accounts.

You can create any number of virtual accounts. For example, if you want to structure your income and save up some money for a new PC, you can create a virtual account for your savings and transfer money between virtual accounts onto your savings account.

That way the amount you want to save is subtracted from your income and you can clearly see how much money is left to spend after putting the saved amount away.
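
To make the account model concrete, here is a minimal TypeScript sketch; the interface names and fields are assumptions for illustration, not the final model:

// A real account mirrors an account at your bank.
export interface RealAccount {
  id: number;
  name: string;
  balance: number;
}

// A virtual account only exists inside cash and may be backed
// by at most one real account.
export interface VirtualAccount {
  id: number;
  name: string;
  balance: number;
  // id of the backing real account, or null for purely virtual accounts
  realAccountId: number | null;
}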

Money is transferred between accounts in transactions. A transaction starts at at most one virtual account and ends at at most one other virtual account. A transaction transfers a non-negative amount of money and has a date as well as a description. Transactions where the source and the target are the same virtual account are not allowed.

For a transaction, there are the following cases:
A) Source Virtual Account (SVA) and Target Virtual Account (TVA) both are not backed by real accounts.
B) SVA is backed by a real account but TVA is not.
C) SVA is not backed by a real account but TVA is.
D) SVA and TVA are both backed by real accounts.

There are special cases:

Incoming amounts of money are transactions without a source account.
Expenses are transactions that have no target.

E) There is no SVA. The transaction adds income to your bank accounts.
F) There is no TVA. The transaction denotes an expense you made.

If SVA and TVA are both backed by real accounts (case D), then the money of the transaction is also transferred between the real accounts.

If SVA and TVA are both not backed (case A) the real accounts are not altered.

If there is a sequence of transactions of cases B – A* – C, then the money that made it from the real source account to the real target account is also transferred between the real accounts.

B – A* – C means that the sequence starts with a transaction of type B, then there are arbitrarily many transactions of type A, and the sequence ends with a transaction of type C. That means money travels over several hops across virtual accounts and, from a global perspective, between real accounts.

The amount of money in an account over a time frame can be displayed as a graph.

Transactions are stored in a transaction log which you can open and inspect at any time.

After every transaction, ‘cash’ will compute the amount of money in all real accounts and the money in all virtual accounts. The two amounts of money have to be the same at all times.
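
A minimal sketch of a transaction and of this invariant check, assuming the RealAccount and VirtualAccount interfaces sketched above:

// assumes the RealAccount and VirtualAccount interfaces sketched above
export interface Transaction {
  id: number;
  // null means there is no source: the transaction is income (case E)
  sourceVirtualAccountId: number | null;
  // null means there is no target: the transaction is an expense (case F)
  targetVirtualAccountId: number | null;
  amount: number; // non-negative
  date: Date;
  description: string;
}

// Invariant: after every transaction, the sum over all real accounts
// must equal the sum over all virtual accounts.
export function balancesMatch(
  realAccounts: RealAccount[],
  virtualAccounts: VirtualAccount[]
): boolean {
  const realTotal = realAccounts.reduce((sum, a) => sum + a.balance, 0);
  const virtualTotal = virtualAccounts.reduce((sum, a) => sum + a.balance, 0);
  return realTotal === virtualTotal;
}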

Let’s implement this as an Angular application. It might be a bit ambitious, but it is a useful example that you can really make use of in your real life.

CRUD for Accounts

The first step will be CRUD for accounts. CRUD stands for Create, Retrieve, Update and Delete. I will use the acronym for an application that provides a user interface and backend code that allows the user to manage items. The user interface will consist of forms that allow the user to input data for new accounts or edit existing accounts. The backend code will persist the accounts to a SQL datastore using Express and the excellent Sequelize SQL ORM mapper.

First, a SQL schema is created to manage real and virtual accounts. Then an API is created for the account CRUD operations in the backend. Once the backend and the data storage are functional, a form is needed to add new accounts, and a list is needed to display existing accounts for editing, inspection and deletion.
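
As a rough sketch of what the backend could look like (not the final code; the routes, model fields and the SQLite connection string are placeholders), an Express endpoint backed by a Sequelize model might start out like this:

import express from 'express';
import { Sequelize, DataTypes } from 'sequelize';

// placeholder connection; the real project would point at its SQL database
const sequelize = new Sequelize('sqlite::memory:');

// minimal account model; the real schema will have more columns
const Account = sequelize.define('Account', {
  name: { type: DataTypes.STRING, allowNull: false },
  balance: { type: DataTypes.DOUBLE, defaultValue: 0 },
});

const app = express();
app.use(express.json());

// Create
app.post('/api/accounts', async (req, res) => {
  const account = await Account.create(req.body);
  res.status(201).json(account);
});

// Retrieve
app.get('/api/accounts', async (_req, res) => {
  res.json(await Account.findAll());
});

sequelize.sync().then(() => app.listen(3000));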

Accessing Virtual Accounts via the API

First, let’s create a service that covers the API calls. Creating services is done via the Angular CLI.

ng generate service <service_name>
ng generate service Account
ng generate service AccountDetails

If you want an AccountService, you have to generate it with the name ‘Account’. ng will append the Service suffix for you.
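
A minimal sketch of what the generated AccountService could grow into; the /api/accounts endpoint and the Account model class are assumptions at this point:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

import { Account } from './account'; // hypothetical model class

@Injectable({
  providedIn: 'root',
})
export class AccountService {

  // assumed backend endpoint
  private readonly baseUrl = '/api/accounts';

  constructor(private http: HttpClient) {}

  getAccounts(): Observable<Account[]> {
    return this.http.get<Account[]>(this.baseUrl);
  }

  createAccount(account: Account): Observable<Account> {
    return this.http.post<Account>(this.baseUrl, account);
  }
}

For HttpClient to be injectable, HttpClientModule from @angular/common/http has to be imported in the corresponding module.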

Angular Template-Driven Forms

There are two types of forms in Angular:
1. reactive (or model-driven) forms
2. template-driven forms

This article is a short reminder on how to find information and on how to work with template-driven forms.

Documentation
The official angular documentation is https://angular.io/guide/forms

Prepare node
Install the latest Node.js Long Term Support (LTS) version with nvm.
nvm install --lts

Use the LTS version
nvm use --lts

Create a test application called angular-forms
ng new angular-forms

Start the app
npm start

Generate the data object that is submitted by the form
ng generate class Hero

Create a form component
An Angular form has two parts:
1. an HTML-based template
2. A component class to handle data and user interactions programmatically.

Generate the form component
ng generate component HeroForm

Update the form component’s HTML

<div class="container">
  <h1>Hero Form</h1>
  <form (ngSubmit)="onSubmit()" #heroForm="ngForm">

    {{diagnostic}}

    <div class="form-group">
      <label for="name">Name</label>
      <input type="text" class="form-control" id="name" [(ngModel)]="model.name" name="name" required #spy>
    </div>
    TODO: remove this: {{spy.className}}

    <div class="form-group">
      <label for="alterEgo">Alter Ego</label>
      <input type="text" class="form-control" id="alterEgo" [(ngModel)]="model.alterEgo" name="alterEgo">
    </div>

    <div class="form-group">
      <label for="power">Hero Power</label>
      <select class="form-control" id="power" [(ngModel)]="model.power" name="power" required>
        <option *ngFor="let pow of powers" [value]="pow">{{pow}}</option>
      </select>
    </div>

    <button type="submit" class="btn btn-success" [disabled]="!heroForm.form.valid">Submit</button>

  </form>
</div>


Update the app.module.ts

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { HeroFormComponent } from './hero-form/hero-form.component';

@NgModule({
  declarations: [
    AppComponent,
    HeroFormComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    FormsModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }


Update hero-form.component.ts

import { Component } from '@angular/core';

import { Hero } from '../hero';

@Component({
  selector: 'app-hero-form',
  templateUrl: './hero-form.component.html',
  styleUrls: ['./hero-form.component.css']
})
export class HeroFormComponent {

  powers = ['Really Smart', 'Super Flexible',
            'Super Hot', 'Weather Changer'];

  model = new Hero(18, 'Dr IQ 3000', this.powers[0], 'Chuck OverUnderStreet');

  submitted = false;

  onSubmit() {
    console.log('Submit clicked');
    console.log(JSON.stringify(this.model));

    this.submitted = true;
  }

  // TODO: Remove this when we're done
  get diagnostic() { return JSON.stringify(this.model); }
}

onSubmit()
When the submit button is clicked, onSubmit() is called in the form component. To persist the data, you can create a service and send the object to the backend as JSON. The backend then persists the object.
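
A sketch of such a service, assuming a backend endpoint at /api/heroes (the URL and the service name are made up for illustration):

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

import { Hero } from './hero';

@Injectable({
  providedIn: 'root',
})
export class HeroService {

  constructor(private http: HttpClient) {}

  // Sends the form model as JSON to the backend, which persists it
  saveHero(hero: Hero): Observable<Hero> {
    return this.http.post<Hero>('/api/heroes', hero);
  }
}

In onSubmit() you would then inject HeroService and call this.heroService.saveHero(this.model).subscribe(...).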

With Spring Boot, you would add a Jersey resource for the endpoint, a JPA repository for the model item, and a facade and a service to save the model via the JPA repository.

Testing with Jest

The Redux Documentation declares Jest as the unit testing framework of choice. This is a beginners introduction to Jest.

Taken from the Jest documentation:

Zero configuration – Jest is already configured when you use create-react-app or react-native init to create your React and React Native projects. Place your tests in a __tests__ folder, or name your test files with a .spec.js or .test.js extension. Whatever you prefer, Jest will find and run your tests.

As it turns out, Jest is already integrated into your codebase should you have used create-react-app.

Error fsevents unavailable

When npm test fails with the following output:

npm test

> cryptofrontend@0.1.0 test /Users/bischowg/dev/react/cryptofrontend
> react-scripts test

Error: `fsevents` unavailable (this watcher can only be used on Darwin)
    at new FSEventsWatcher (/Users/bischowg/dev/react/cryptofrontend/node_modules/sane/src/fsevents_watcher.js:41:11)
    at createWatcher (/Users/bischowg/dev/react/cryptofrontend/node_modules/jest-haste-map/build/index.js:780:23)
    at Array.map (<anonymous>)
    at HasteMap._watch (/Users/bischowg/dev/react/cryptofrontend/node_modules/jest-haste-map/build/index.js:936:44)
    at _buildPromise._buildFileMap.then.then.hasteMap (/Users/bischowg/dev/react/cryptofrontend/node_modules/jest-haste-map/build/index.js:355:23)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:160:7)
npm ERR! Test failed.  See above for more details.

The fix is described in https://github.com/expo/expo/issues/854: remove the npm-installed watchman and install it via Homebrew instead:

npm r -g watchman
brew install watchman

I had to run the brew command three times before it finally worked. npm test should now work without issues.

Testing Action Creators

Given a file actions.js that contains the action creator

export function receiveEntriesActionCreator(json) {
  return {
    type: RECEIVE_ENTRIES,
    entries: json
  }
}

you want to write a test that verifies that the action creator returns an action that has a type property with a value of RECEIVE_ENTRIES and an entries property that contains a specific javascript object.

In order to write the test, add a file called actions.test.js next to actions.js. In actions.test.js insert:

import * as actions from './actions.js';

import {
  RECEIVE_ENTRIES,
  ADD_ENTRY,
  UPDATE_ENTRY,
  DELETE_ENTRY
} from './actions.js'

test('receiveEntriesActionCreator returns a correct action', () => {

  const entries = [{ id: '12345', password: 'abcdef' }]

  const expectedAction = {
    type: RECEIVE_ENTRIES,
    entries
  }

  expect(actions.receiveEntriesActionCreator(entries)).toEqual(expectedAction)

});

The test() function is called with a description of what the test is trying to verify. The second parameter is a function containing the code that the test should execute. The test assembles the expected result and then calls expect().toEqual() to compare it with the action returned by the action creator.

In the console, type npm test to start a watcher that executes the tests every time you save a change to a file that has a unit test.