Node

Devesh Kr Sri
76 min read · Mar 20, 2021

Interview question

Request / Response / Headers:
- Authorization
- Content-Type
- Accept
HTTP status codes
1xx informational response – Req. was received, continuing process
2xx successful – Req. was successfully received, understood, accepted
3xx redirection – further action needs to be taken to complete req.
4xx client error – Req. contains bad syntax or cannot be fulfilled
5xx server error – Server failed to fulfil an apparently valid request
100 Continue - The server has received the request headers and the client should proceed to send the request body (in the case of a request for which a body needs to be sent; for example, a POST request).
200 OK - Standard response for successful HTTP requests.
201 Created - Req. fulfilled, resulting creation of a new resource
202 Accepted - The request has been accepted for processing, but the processing has not been completed (e.g. an async delete).
300 Multiple Choices - redirection
301 Moved Permanently
302 Found - Temporarily moved
304 Not Modified - The cached response can be reused
401 Unauthorized - Authentication is required and has failed or has not been provided (missing or invalid auth token/password). The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource.
402 Payment Required
403 Forbidden - The request was valid, but the authenticated user is not allowed to access the particular resource.
404 Not Found - The requested resource could not be found but may be
available in the future.
500 Internal Server Error - A generic error message; the request looked valid but the server was unable to process it.
502 Bad Gateway - The server, acting as a gateway or proxy, received an invalid response from the upstream server.
503 Service Unavailable - The server cannot handle the request (because it is overloaded or down for maintenance). Generally, this is a temporary state.
504 Gateway Timeout - The server was acting as a gateway or proxy and did not receive a timely response from the upstream server (e.g. a slow upstream or a very large file).
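The five status classes above can be captured in a small helper. This is just a sketch; the `statusClass` function name is mine, not a standard API:

```javascript
// Map an HTTP status code to its class, per the 1xx–5xx table above.
// (statusClass is a hypothetical helper name, not a built-in.)
function statusClass(code) {
  if (code >= 100 && code < 200) return 'informational';
  if (code >= 200 && code < 300) return 'successful';
  if (code >= 300 && code < 400) return 'redirection';
  if (code >= 400 && code < 500) return 'client error';
  if (code >= 500 && code < 600) return 'server error';
  return 'unknown';
}

console.log(statusClass(201)); // successful
console.log(statusClass(404)); // client error
console.log(statusClass(503)); // server error
```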
Cloudflare
CloudFront
ETag
cache
Docker
Kubernetes
AWS/Azure
Testing - Mocha, Chai, Jest
REST API + GraphQL

Load Balancer :

In the context of AWS (Amazon Web Services), ALB and ELB are types of load balancers that distribute incoming application or network traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in multiple Availability Zones.

ALB - Application Load Balancer -> API -> (queue) -> task master -> worker pool.
Types of queue: RabbitMQ, Kafka. Example: Zomato -> a review system that queues reviews so workers can scan them and remove inappropriate/banned words.

ELB - Elastic Load Balancing (often combined with a CDN):
  1. ELB (Elastic Load Balancing): This is the general term for AWS’s load balancing service. There are three types of Elastic Load Balancers: Classic Load Balancer (CLB), Application Load Balancer (ALB), and Network Load Balancer (NLB).
  2. ALB (Application Load Balancer): This is a type of ELB that operates at the request level (layer 7), routing traffic to targets based on the content of the request. ALB is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers.

In a Node.js application, you don’t directly interact with ALB or ELB. Instead, you deploy your Node.js application on EC2 instances or containers, and then you configure the ALB or ELB to distribute incoming traffic to those instances or containers.

Here’s a basic example of how you might set up an ALB with a Node.js application:

  1. Deploy your Node.js application on one or more EC2 instances.
  2. Create a target group for those instances.
  3. Create an Application Load Balancer.
  4. Configure the ALB to route traffic to the target group.

Remember, this is a high-level overview. The exact steps can vary depending on the specifics of your application and your AWS configuration.

  1. What is NPM & how is it being used with Node JS?

NPM is the package manager for JavaScript-based applications and for sharing reusable code. NPM gives you the capability to initialize an application using package.json. NPM and Yarn are the most popular package managers for JavaScript modules. If you are creating some reusable code and want to share it with others, you can create an NPM module, publish it to the NPM registry, and that module can then be used by others.

We can install a package globally, locally, or as a dev dependency. Global packages are available system-wide and can be accessed from the system CLI, like the webpack and webpack-dev-server modules.

npm init
npm install --save react
npm install -g webpack
npm install --save-dev gulp

2. What is Node.JS? When should we use Node JS?

Node.js is a server-side runtime based on Google’s V8 JavaScript engine. It is used to build scalable programs that need to run very fast. It is built on top of the V8 engine together with C/C++ libraries such as libuv (and historically libeio). Node.js can be used to build APIs and applications that require a real-time interface, like reading live data, streaming data, and doing socket communication. Node.js should not be used for CPU-intensive tasks, like parsing huge files or transforming large volumes of database records, on the main thread. Node.js is a highly efficient runtime which scales well and provides non-blocking I/O on a single-threaded event loop.

Features of Node:
1. Node JS provides scalable applications.
2. Node JS is server-side JavaScript and single threaded.
3. Node JS provides a non-blocking I/O platform.
4. Node JS is built on top of the V8 Chrome engine.
5. Node JS provides a faster way to create REST APIs and has a good stack of libraries to support applications.
6. Node JS enables faster application development and can be used in microservice-like environments.

3. What is the Node.js platform stack, and what different libraries does it use?

Node JS uses the same V8 runtime engine as Chrome; it is built on top of the same engine.

V8 is an open-source JIT (Just In Time) compiler written in C++ which has outperformed PHP, Ruby and Python performance-wise. V8 compiles JavaScript directly into machine code. The V8 runtime environment comprises 3 major components:

Compiler: dissects the JS code.
Optimizer: the optimizing compiler (historically called Crankshaft) creates an abstract syntax tree (AST), which is further converted to SSA (static single assignment) form and optimized.
Garbage collector: removes dead objects from the new space and promotes survivors into the old space. The garbage collector plays a vital role in keeping Node.js lightweight.

The base libraries are C/C++ libraries; libuv manages the thread pool, so while the JavaScript engine is single-threaded, internally Node manages a thread pool. Libuv (historically libeio/libio) handles Node’s asynchronous I/O operations and the main event loop. A thread pool reserved in libuv handles thread allocation to individual I/O operations.

On top of the C++ libraries, Node.js has bindings, such as HTTP and socket bindings, which are invoked by core modules of Node.js like fs, net, dns and http. The Node.js standard library is written in JavaScript to access the C++ library interfaces, since Node.js runs on a server, not in a browser.

4. What is the Node JS architecture?

Node is single threaded and based on a non-blocking way of dealing with I/O operations. It is fast and scalable while running on a single thread, doing I/O operations like database reads and file reads asynchronously using the event loop.

Saying Node JS is single threaded (or that its JavaScript interface is single threaded) is only a half truth: it is actually event-driven and single-threaded with background workers. The event loop is single-threaded, but most of the I/O work runs on separate threads, because the I/O APIs in Node.js are asynchronous/non-blocking by design, in order to accommodate the event loop.

The Node.js event loop is a feature of the Node.js base library. Node.js runs in a single-threaded environment and gets high performance from this single-threaded, event-driven model. The event loop keeps running, looking for asynchronous requests. Whenever an asynchronous request comes in, it is placed in the event queue, gets processed when the loop is free, and a callback is notified once execution is over. When multiple async requests arrive, they are all pushed to the event queue and processed one by one without blocking code execution. The event loop is part of the libuv library and runs as a single thread waiting for async requests in a cycle; once it sees anything coming, it processes it.

5. Explain in depth about event loop mechanism in Node JS.

The event loop processes a queue of callback functions. When an async function like setTimeout executes, its callback is pushed to the queue once the underlying operation completes. The event loop is the main part of the Node.js system; it keeps running and executing as long as the Node.js process is active in memory. It is responsible for handling asynchronous operations in the application like HTTP calls, I/O operations and database reads. All such requests are queued, waiting to be executed on the next free I/O slot; on completion, the event loop is notified and triggers the callback to the main function.

request('http://www.google.com', function(error, response, body){
  console.log(body);
});
console.log('Done!');

In the code example above, we use the request module to make an HTTP call to the google.com URL. It is an asynchronous operation, since we are reading data from the network. The task is handed off to the I/O layer, and the callback added in this code is executed once we have a response from the I/O task. The Node.js runtime is not blocked by asynchronous tasks; it moves to the next statement, so after starting the HTTP call it moves on to the console.log statement. We see the output 'Done!' on the console first, and after some time the callback is invoked with data coming from the network request.
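A minimal sketch of this non-blocking order of execution, using setTimeout in place of a network call so it runs anywhere (the order array is just for illustration):

```javascript
const order = [];

// The async callback is queued on the event loop; it runs after the sync code.
setTimeout(function () {
  order.push('async callback');
  console.log('async callback from the event loop');
}, 0);

order.push('sync');
console.log('Done!'); // printed first, even though the timeout was scheduled earlier
```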
  6. What is the role of package.json and what are NPM scripts?

The package.json file holds information about the project. It gives npm the information that allows it to identify the project as well as handle the project’s dependencies, both regular and devDependencies.

Some of the fields are basic information like name, description, author, dependencies, scripts and some metadata.

If you install an application using npm, then all the dependencies listed will be installed as well. Additionally, installation creates the ./node_modules directory.

package.json is just a JSON file holding meta information about the application. A key part is npm scripts, which execute our application with different commands: to run a Node.js process we either write node index.js on the terminal directly, or we just run npm run start.


We use npm scripts to automate our tasks and list them all together, executing them using npm run <script-name>. npm scripts are powerful; you can add pre and post hooks for these tasks. They work like the task runners we used to have, such as Gulp and Grunt.

{
  "name": "node-js-sample",
  "version": "0.2.0",
  "description": "A sample Node.js app using Express 4",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.13.3"
  },
  "engines": {
    "node": "4.0.0"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/heroku/node-js-sample"
  },
  "keywords": [
    "node",
    "heroku",
    "express"
  ],
  "author": "Mark Pundsack",
  "contributors": [
    "Zeke Sikelianos <zeke@sikelianos.com> (http://zeke.sikelianos.com)"
  ],
  "license": "MIT"
}

In the package.json above we have a scripts section containing a start task, so npm scripts enable execution of this task:

npm run start will run node index.js

Question 7

What is event-driven programming? And how is Node JS event driven?

From the name itself it is clear: event-driven means driven by events. In Node.js we can write events and attach handlers that are triggered once those events occur.

Event-driven programming: Node.js runs an event loop on a single thread; at any moment one event is being processed, and there is always a handler to handle that event without interruption.

Before getting into eventEmitter code, we should understand why the event loop is the best example of event-driven programming.

The event loop performs two operations in a loop:

  1. Event detection
  2. Event handler triggering

var events = require('events');
var eventEmitter = new events.EventEmitter();

// Create an event handler:
var myFun = function () {
  console.log('I hear a voice!');
}

// Assign the event handler to an event:
eventEmitter.on('horror', myFun);

// Fire the 'horror' event:
eventEmitter.emit('horror');

Note: EventEmitter.emit() invokes the registered listeners synchronously.

There can be different events, like "on database read, do this" or "after doing this successfully, run this code".

Those things can be done using events. events is a core module where we can emit an event from one place and define a handler elsewhere which takes care of handling that event.

The example below shows events using classes: we create a custom event emitter by extending the EventEmitter class, emit an event and capture it.

const EventEmitter = require('events');

class MyEmitter extends EventEmitter {}

const customEmitter = new MyEmitter();
customEmitter.on('event', function(a, b) {
  console.log('processed in first iteration', a, b);
});
customEmitter.emit('event', 'Hi', 'Hello');

Question X

CPU-intensive tasks can be offloaded to worker threads, available via the worker_threads module in recent Node versions.

Question X - Built-In Modules

JavaScript vs Node ->

On the front end, the browser provides the environment to run JS; on the backend, Node provides the environment, so that JS can access server-side capabilities like reading files and using the network.

Stream — one of the built-in modules for Node to handle streaming data. Node has readable and writable streams. In an HTTP server, the request object is a readable stream and the response object is a writable stream.

Global scope: the window object on the front end is equivalent to global on the backend. Globals don’t require a module to be added; we can use them directly, e.g. global, console, process, setInterval, setTimeout, __dirname, __filename etc.

Question 8

How is the Node JS V8 runtime different from what we have in the Chrome console?

The Chrome console and Node.js both use the V8 JavaScript engine, but the major difference is that Node JS adds C++ core libraries to manage HTTP and socket communication, while the Chrome V8 embedding is mainly a browser-oriented environment. Node.js is a browserless environment, mainly CLI-based, for executing tasks.

In the browser we have access to the window, document and console objects; in Node.js the document and window objects are not available. It is a server-side runtime environment which can be executed from the command line. Node.js is mainly used for creating HTTP servers, socket communication, and reading or writing real-time data.

Question 9

What is the difference between asynchronous functions, synchronous functions and pure functions?

Synchronous function: a function which does simple execution and doesn’t deal with I/O. These are simple functions whose output we can predict, performing only basic data manipulation.

Asynchronous function: a special function which deals with network or I/O operations like database reads, file reads or getting data from some API. These operations always take time, and you do not receive an instant response.

https://www.youtube.com/watch?v=BXqDxpI7XwU

Node is faster due to the event loop; it is a high-performance, scalable system. It works on a single main thread which never gets blocked, and all async operations are done by worker threads.

A simple asynchronous code sample:

var request = require('request'); // third-party HTTP client (now deprecated)
var userDetails;

function initialize() {
  // Setting URL and headers for request
  var options = {
    url: 'https://api.github.com/users/narenaryan'
  };

  return new Promise(function(resolve, reject) {
    // Do async job
    request.get(options, function(err, resp, body) {
      if (err) {
        reject(err);
      } else {
        resolve(JSON.parse(body));
      }
    });
  });
}

// Synchronous code sample
function foo(){}

function bar(){
  foo();
}

function baz(){
  bar();
}

baz();

Question 10

What are the different options to write asynchronous code in Node?

Using setTimeout: run some code after a defined delay.

Using callbacks: pass a function that is invoked once the asynchronous task is over.

Using the async module: the async utility module in Node.js.

Using promises: use native promises and wait until the promise is resolved.

Promises and async/await both work through the event loop.

Example: in a promise you can check the user id and password; if the result is a success, redirect to the home page in the .then stage, else report a failed login.

Promise chains let you sequence dependent async steps.

Using async/await: write fewer lines of code by using async/await.

Promise vs async/await (differences)

Both Promises and async/await are used to handle asynchronous operations in JavaScript, but they are used in slightly different scenarios:

  1. Promises: Promises are used when you want to handle asynchronous operations in the future. They can be in one of three states: pending, fulfilled, or rejected. Promises are typically used when you want to do something with the result of an asynchronous operation before it has completed. Here’s an example:
fetch('https://api.example.com/data')
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(error => console.error('Error:', error));

2. Async/Await: Async/await is just syntactic sugar on top of Promises. It makes your asynchronous code look and behave a little more like synchronous code, which can make it easier to understand and reason about. Here’s the same example using async/await:

async function fetchData() {
  try {
    const response = await fetch('https://api.example.com/data');
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error('Error:', error);
  }
}
fetchData();

In general, you might choose to use Promises if you’re dealing with a single asynchronous operation, or if you’re dealing with multiple asynchronous operations that do not depend on each other.

You might choose to use async/await if you’re dealing with multiple asynchronous operations that depend on each other (i.e., you need the result of one operation to start the next). Async/await can make this type of code easier to write and understand.
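For independent operations, Promise.all runs them concurrently and resolves once all are done. A small sketch with dummy timer-based promises (the names and delays are illustrative):

```javascript
// Two independent async operations (simulated with timers).
const p1 = new Promise((resolve) => setTimeout(() => resolve('users'), 20));
const p2 = new Promise((resolve) => setTimeout(() => resolve('orders'), 10));

// Promise.all waits for both; results keep the input order, not finish order.
Promise.all([p1, p2]).then(([users, orders]) => {
  console.log(users, orders); // users orders
});
```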

Node JS Interview Question — Set #01

Question 11

How does single-threaded Node JS handle concurrency when multiple I/O operations are happening?

Node provides a single thread to the code we write so that code can be written easily, without bottlenecks and without blocking on I/O. Internally, Node uses multiple POSIX threads for various I/O operations such as network reads, database reads or file read operations.

When the Node API or code gets an I/O request, it takes a thread from the thread pool to perform that asynchronous operation, and once the operation is done, the result is pushed to the event queue. On each such event, the event loop runs and checks the queue; if the execution stack of Node is empty, it moves the queued callback onto the execution stack.

This is how Node manages concurrency.

Reference : https://strongloop.com/strongblog/node-js-is-faster-than-java/

Request modeling is different: Node.js does not create a separate thread for every request; it just runs the event loop and delegates async tasks to worker threads.

Question 12

What is a Node JS callback and how does it help to run async tasks?

Callbacks are used to handle async calls by passing functions as parameters. A callback is plain JavaScript; while writing a callback you don’t need the Node.js environment. It is just a higher-order function pattern: a function takes another function as an argument and invokes it later. This pattern is used to avoid blocking on I/O or the network and allows the next instruction to run.

Node JS handles all asynchronous calls via callbacks natively. Callbacks are just functions that you pass to other functions, which we call higher-order functions in JavaScript. An example of a callback is reading a file using the fs core module: the first argument is the path of the file and the last argument is a function, which is nothing but a callback.

Node.js uses callback functions extensively; the Node APIs are written to support callbacks.

var fs = require('fs');
fs.readFile('app.txt', function (err, data) {
  if (err) return console.error(err);
  console.log(data.toString());
});
console.log('Program Ended');

Example 1:

setTimeout(() => {
  console.log("hello")
}, 1000)

Example 2:

function add(a, b, callback){
  console.log(`the sum of ${a} and ${b} is ${a+b}`)
  callback();
}
function desp1(){
  console.log("hello")
}
add(5, 6, desp1)

Example 3 (browser/jQuery):

$("#btn_1").click(function(){
  alert("btn is clicked")
})

Question 13

Explain basic async apis in javascript like setTimeout, setImmediate & setInterval

setTimeout and setInterval are the two timer functions; they are used to schedule code execution.

setTimeout: this function delays the execution of the code written under it. It executes once, after the defined delay in milliseconds.

After 1 second we will get a console message saying "Hello":

function sayHi() {
  console.log('Hello');
}
setTimeout(sayHi, 1000);

setInterval: if you want to execute a function many times (or indefinitely), use setInterval(), passing the interval duration.

This function will keep executing after every second and print value of i on console

let i = 0;
function increment() {
  i++;
  console.log(i);
}
var myVar = setInterval(increment, 1000);
// clearInterval(myVar);

The clearInterval method is used to cancel a timer started with setInterval.

setImmediate() and setTimeout() are both scheduled through the event loop, but in different phases.

Another important method is setImmediate: we use setImmediate if we want to queue a function to run after whatever I/O event callbacks are already in the event queue (the "check" phase of the event loop).

We can use process.nextTick to queue a function at the head of the queue, so that it executes immediately after the currently running operation completes, before the event loop continues — so process.nextTick callbacks fire even earlier than setImmediate.

Question 14

What is REPL in Node JS and how does it help to run code?

Node.js comes with an environment called the REPL (aka the Node shell). REPL stands for Read-Eval-Print-Loop; it is the easiest way to run Node.js code on the console.

The repl module provides a Read-Eval-Print-Loop (REPL) implementation that is available both as a standalone program and for inclusion in other applications. It can be accessed using:

const repl = require('repl');

It gives you a result similar to what you get in the console of the Google Chrome browser; it looks like the Chrome console, without the browser-based APIs.

Question 15

What are the core and frequently used modules in Node JS? Explain the difference between core modules and user-defined modules.

Node.js is a lightweight framework. The core modules include the minimum functionalities of Node.js. These core modules are compiled into its binary distribution and load automatically when the Node.js process starts.

Cluster: the cluster module helps us create child processes that run simultaneously and share the same server port. Node.js is single threaded and, by default, uses only a single core, so on a multi-core system the cluster module lets you take advantage of all cores: it allows you to easily create child processes that each run their own event loop, to handle the load.

Crypto — To handle OpenSSL cryptographic functions
Dgram — Provides implementation of UDP datagram sockets
Dns — To do DNS lookups and name resolution functions
Domain — Deprecated. To handle unhandled errors
Events — To handle events
Fs — To handle the file system
Http — To make Node.js act as an HTTP server
Https — To make Node.js act as an HTTPS server.
Net — To create servers and clients
Os — Provides information about the operating system
Path — To handle file paths

All the modules mentioned above are core modules in Node.js, as they come bundled with the Node.js installation. User-defined modules are those we create ourselves in the Node.js application, writing module.exports in one file and requiring them in another file.

Reference : https://nodejs.org/api/synopsis.html

Question 16

Explain events in Node JS and how events help us to create an event-driven system.

Every action has a reaction; similarly, every event has a handler to catch it and act on it.

In Node.js we have an event emitter which is used to emit events; those events can then be captured to perform some operation.

Events can be compared with a simple socket.io example: when the server broadcasts a message, it can be captured by all clients subscribed to that channel.

Node.js has a built-in core module known as "events", where you can create, fire, and listen for your own events.

It is plain and simple: we create events and write a handler in another place to capture them.

To include the built-in events module, use the require() method. The event properties and methods live on an instance of the EventEmitter object, so to access them we need to create an EventEmitter object:

var events = require('events');
var eventEmitter = new events.EventEmitter();

Here is an example where we emit an event named "scream" and there is a handler, myEventHandler, which captures this event and processes the task. These events are fire-and-forget: you fire them and forget them.

var events = require('events');
var eventEmitter = new events.EventEmitter();

// Create an event handler:
var myEventHandler = function () {
  console.log('I hear a scream!');
}

// Assign the event handler to an event:
eventEmitter.on('scream', myEventHandler);

// Fire the 'scream' event:
eventEmitter.emit('scream');

Question 17

How can we read a file in a synchronous way and an asynchronous way using the fs module?

The normal behaviour in Node.js is to read the content of a file in a non-blocking, asynchronous way: you tell Node to read the file and invoke a callback once reading is done.

We use the core module fs; fs.readFile provides the asynchronous way of reading a file.

fs.readFile takes 3 arguments: the name of the file ('app.txt' in this case), the encoding of the file ('utf8'), and a callback function. The callback is called when the file-reading operation has finished, and we will see the file content printed on the terminal.

As this operation is non-blocking and asynchronous in nature, it is processed via the event loop and execution is not blocked: we will see the message "file read is Over" before getting the file contents on the terminal.

var fs = require('fs');
fs.readFile('app.txt', 'utf8', function(err, contents) {
  console.log(contents);
});
console.log('file read is Over');

There is another, blocking way to read a file: fs.readFileSync reads the file synchronously and blocks execution until the file read operation is over.

var fs = require('fs');
var contents = fs.readFileSync('app.txt', 'utf8');
console.log(contents);
console.log('file read is Over');

Question 18

How to capture command line arguments while executing node js process.

The arguments are stored in process.argv when you pass args with the node command,

like: node index.js "hello" "world"

[runtime] [script_name] [argument-1 argument-2 argument-3 … argument-n]

process.argv is an array containing the command line arguments. The first element is the path to the node executable, the second element is the path of the JavaScript file, and the next elements are any additional command line arguments.

We can print all arguments using this loop, and we can pass information to an application before it starts. This is particularly useful if you want to apply some settings before starting the application, like passing the environment and port:

process.argv.forEach(function (val, index, array) {
  console.log(index + ': ' + val);
});

Question 19

What is the error-first convention in callback handlers in Node JS code?

Generally, the first argument to any callback handler is an error object. The reason for passing the error object first is that it can be either null or an actual error, so while dealing with a callback we first check whether we received null or an error object.

If we get an error object, then we act based on the error. Error handling in a typical callback handler looks as follows:

function callback(err, results) {
  // usually we'll check for the error before handling results
  if (err) {
    // handle error somehow and return
    return;
  }
  // no error, perform standard callback handling
}

This applies to all callbacks we write in our code, and it is often enforced by ESLint configuration, which forces developers to write code in this way.

Question 20

What are the different module patterns in JavaScript? Can you explain CommonJS modules?

In JavaScript, the word "modules" refers to small units of independent, reusable code. They are the foundation of many JavaScript design patterns and are critically necessary when building any non-trivial JavaScript-based application.

We have different module patterns in JavaScript, like CommonJS, AMD, UMD and ES6 modules.

A CommonJS module is essentially a reusable piece of code that can be fetched from the npmjs.com registry or created locally. From a module we can export specific objects, making them available for other modules to require in their programs. If you have written Node.js code, you have seen this and are probably very familiar with this format.

Using CommonJS, every JavaScript file stores its modules in its own unique module context (just like wrapping it in a closure). We use module.exports to export a module and require to load that module in another file.

The module.exports and require syntax is used everywhere in Node.js code; all CommonJS modules are imported this way:

function myModule() {
  this.hello = function() {
    return 'hello!';
  }
  this.goodbye = function() {
    return 'goodbye!';
  }
}
module.exports = myModule;

// in another file:
// var MyModule = require('./myModule');

Question 21

What is callback hell, how can it be avoided, which library can be used, and how do you promisify a library?

Callback hell refers to a coding style with deeply nested callbacks: lots of nesting of callback functions creates callback hell. The code becomes difficult to debug and understand. In such cases we can use other techniques to overcome the callback hell problem:

a. Using promises
b. Yield operator and generator functions from ES6
c. Modularising code
d. Using the async library (e.g. async.waterfall)
e. By not nesting callbacks

In JavaScript, many libraries support the callback style of writing code: Redis clients, MySQL clients and so on are callback based, so it is better to promisify these libraries and use them with promises.

We can use the util core module to promisify modules in Node.js; for example, we can promisify the fs module, which provides the callback-based file read operation.

The callback-based file read/write operations then become promise based, and we use .then() to capture the response from the resolved promise.

Question 22

What are promises and how to use promises for simple AJAX call or for multiple AJAX calls

Promises give an alternate way to write asynchronous code and it gives advantages over callback.

We can use promises instead of callbacks. A promise either returns the result of execution or an error/exception. A promise is in one of three states: pending, resolved (fulfilled), or rejected. Once a promise is resolved or rejected, its .then() handler is triggered. The .then() function waits for the promise to settle and executes once the final state is known. Promise.prototype.then() takes two arguments, both callbacks: the first for success and the second for errors.

readFileAsync(filePath, { encoding: 'utf8' })
  .then((text) => {
    console.log('CONTENT:', text);
  })
  .catch((err) => {
    console.log('ERROR:', err);
  });

function readFileAsync(filePath) {
  return new Promise(function(resolve, reject) {
    resolve('some data');
  });
}

Question 23

What is the global object in Node JS and how can it be used to manage environments in an application?

The global keyword represents the global namespace object; we can inspect it by opening the Node terminal, or with console.log(global) in an application.

When we declare variables using let/const/var they are module scoped, but when we assign without any declaration keyword they get added to the global namespace of the application.

We can also add a few things to the global object, such as runtime environment configuration.

process and Buffer are also part of the global object.

process is a very large object holding all information about the running process; it also exposes global.process.env, where we can manage environment-specific configuration.

In our application we pass these environment variables when running the Node application, like:

node_env=local port=5009 node index.js

In this example process.env.node_env will be 'local' and process.env.port will be '5009' in our code (environment variables always arrive as strings).
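A minimal sketch of reading those variables (node_env and port are the hypothetical names from the command above; note the string-to-number coercion for the port):

```javascript
// environment variables are strings; coerce and default them explicitly
const env = process.env.node_env || 'local';
const port = Number(process.env.port) || 5009;

console.log(`running with env=${env} port=${port}`);
```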

Question 24

What are streams and how are they different from a normal API response?

Streams are just a flow of data; stream pipes let you easily read data from a source and pipe it to a destination. A stream is nothing but an EventEmitter that implements some special methods. Depending on which methods are implemented, a stream becomes Readable, Writable, or Duplex (both readable and writable).

There are different use cases for streams; for example, we can pipe a stream as the response from an API server.

For example, we can create a file reader stream that reads a file until its data is exhausted, emitting events along the way such as data, end, and error.

var fs = require('fs');
var data = '';

var readerStream = fs.createReadStream('input.txt');
readerStream.setEncoding('UTF8');

readerStream.on('data', function(chunk) {
  data += chunk;
});
readerStream.on('end', function() {
  console.log(data);
});
readerStream.on('error', function(err) {
  console.log(err.stack);
});

Question 25

How can we have separate configuration for development and production environments, e.g. a configuration file that manages database connections?

There are different options; one is to use the dotenv module to manage configuration for the application runtime.

We can maintain different config files per environment, such as dev.properties and qa.properties.

At runtime we pass process.env.NODE_ENV as either development or production, so the code can load the appropriate env file and its configuration, such as the MongoDB URL or MySQL connection URL, which differ between development and production.

We can then require that file, get the configuration object, and pass it wherever required.

// config/config.js
var config = {
  production: {
    mongo: {
      url: ''
    }
  },
  dev: {
    mongo: {
      url: ''
    }
  }
};
exports.get = function get(env) {
  return config[env] || config.dev;
};

// elsewhere in the application
const config = require('./config/config.js').get(process.env.NODE_ENV);
const dbconn = mongoose.createConnection(config.mongo.url);

Question 26

What module types does Node.js currently support?

In JavaScript we have module systems such as ES modules (ESM), CommonJS, AMD, and UMD.

The obvious one for Node.js is CommonJS, the current module system (the one that uses require and module.exports). Since CommonJS is already established as the module system for Node.js, ES Modules have to live side by side and interoperate with it.

Whatever we do today using module.exports and require is CommonJS; these are the modules available on the npm registry.

All modules downloaded from npmjs.com (the npm registry) support the CommonJS style of require and exports.

Historically, if we wanted to use ES6 module syntax (import and export) we had to use Babel tooling such as babel-register or babel-node, as there was no native support for ES6 modules in Node.js.

Native support has since arrived, with a slight change: these files are ES modules (ESM) and use the .mjs extension.

// 01-kettle.mjs
export const spout = 'the spout'
export const handle = 'the handle'
export const tea = 'hot tea'

// another module (.mjs)
import { handle, spout, tea } from './01-kettle.mjs'

console.log(handle) // ==> the handle
console.log(spout)  // ==> the spout
console.log(tea)    // ==> hot tea

Question 27

What are the differences between promises, callbacks, and async/await?

Async/await was introduced more recently and is a powerful tool for writing asynchronous code in a synchronous style: async/await code reads like simple synchronous code, does not block the event loop, and is implemented on top of promises.

Async functions

To make a function async, we just add the async keyword before it:

async function f() {
  return 1;
}

A function marked async always returns a promise. If the code returns a value, JavaScript automatically wraps it in a resolved promise with that value.

let value = await promise;

The keyword await makes JavaScript wait until that promise settles and returns its result.

Here's an example with a promise that resolves in 1 second:

async function f() {
  let promise = new Promise((resolve, reject) => {
    setTimeout(() => resolve("done!"), 1000)
  });
  let result = await promise; // wait till the promise resolves (*)
  alert(result); // "done!"
}
f();

Promises are another important tool in javascript to manage asynchronous code.

A promise either returns the result of execution or an error/exception. A promise has three states: pending, resolved, or rejected. Once a promise is resolved or rejected, its .then() handler is triggered. The .then() function waits for the promise to settle and executes once the final state is known. Promise.prototype.then() takes two callbacks: the first for success and the second for errors.

Promises allow us to cleanly chain subsequent operations while avoiding callback hell; as long as you always return a promise from each of your then blocks, the chain continues.

Promises are now supported natively, so there is no need for an external library. A promise is just a representation of an asynchronous result that ends up resolved or rejected, and .then executes the appropriate callback accordingly.

A little bit about callbacks

Node handles all asynchronous operations using callbacks natively. Callbacks are just functions that you pass to other functions. For example, when reading a file with the fs core module, the first argument is the path of the file and the last argument is a function, which is the callback.

Question 28

What is the event loop? Is it part of the V8 runtime, and is it also available in the browser?

In Node.js the event loop is provided by the libuv library; it is not part of the V8 runtime. Browsers have their own event loop, provided by the browser runtime rather than libuv.

It is a single-threaded loop that keeps running in the Node.js process, listening to the event queue.

The Event Loop is the entity that handles external events and converts them into callback invocations. It is a loop that picks events from the event queues and pushes their callbacks into the Call Stack.

There is only one thread that executes JavaScript code and this is the thread where the event loop is running. The execution of callbacks is done by the event loop.

To understand more about the event loop, see this reference:

https://medium.com/the-node-js-collection/what-you-should-know-to-really-understand-the-node-js-event-loop-and-its-metrics-c4907b19da4c

Question 29

What will be the state of a Node.js process when the event loop and the call stack are both empty?

In that case the Node.js process exits, as it has nothing left to process or execute. When a Node.js process starts it starts the event loop, and if there is nothing pending in the event loop it exits right there.

To prevent this, a server process keeps a handle open: for example, an HTTP server that keeps the event loop listening for request events, so the loop is never completely idle.

var http = require('http');
var server = http.createServer(function (req, res) {
  res.end('ok');
});
server.listen(7000); // the open listener keeps the event loop alive

Question 30

What is call stack, is it part of V8 runtime environment ?

Yes, the call stack is part of the JavaScript engine, whether that is Chrome's V8 or the Chakra engine.

The V8 JavaScript engine is a single-threaded interpreter with a heap and a single call stack. The browser additionally provides APIs like the DOM, AJAX, and timers. The engine's core job is executing functions using the heap and the call stack (a stack using the LIFO pattern).

The call stack operates on the Last In, First Out (LIFO) principle: the last function pushed onto the stack is the first popped off, when that function returns. It is a pure stack data structure.

We can see this in the example below, where functions call each other.

  1. thirdFn() is called from the top level and its frame is pushed onto the stack.
  2. thirdFn() calls secondFn(), which is pushed onto the stack on top of it.
  3. secondFn() calls firstFn(), which is pushed on top in turn.
  4. firstFn() prints "hello from first Fn" to the console and returns.
  5. firstFn() is popped off the stack.
  6. secondFn() returns and is popped off the stack.
  7. thirdFn() returns and is popped off, leaving the stack empty and its memory released.
function firstFn(){
  console.log('hello from first Fn');
}
function secondFn(){
  firstFn();
}
function thirdFn(){
  secondFn();
}
thirdFn();

Question 31

Have you used Yarn as a package manager, and how is it different from npm?

Yarn is another package manager for installing and managing JavaScript libraries. Yarn uses the same registry that npm does, so every package available on npm is also available with Yarn.

To add a package, run yarn add &lt;package&gt;.

If you need a specific version of the package, you can use yarn add package@version

Yarn also has an init command.

The yarn init command will walk you through the creation of a package.json file to configure some information about your package. This is the same as npm init with the npm package manager.

Here are the main differences:

Yarn caches every installed package and installs packages in parallel, so installs with Yarn tend to feel faster than with npm. Both Yarn and npm download packages from the same npm registry.

Yarn also emphasizes stability, locking the versions of installed packages (via yarn.lock). The higher install speed matters most for big projects with many dependencies.

Question 32

What tools can be used to deploy a Node.js application on a server?

There are several popular tools that keep a Node.js application up and running on a server and restart the process if anything goes wrong, such as PM2, forever, and supervisord.

PM2 provides :

  1. Built-in load balancer.
  2. Multiple instances of the application running on the same port.
  3. Can run the application in cluster mode.
  4. Can manage deployment of multiple applications with a single config.
  5. Provides multiple deployment options.
  6. Provides zero downtime on application deployment.

If you use PM2, you can easily hook it up with the keymetrics.io monitoring tool to see API statistics.

npm install -g pm2
pm2 start app.js

Zero-config load balancer

PM2 lets you scale up your application by creating multiple instances that share the same server port. This also allows you to restart your app with zero-second downtime.

PM2 commands to start/stop/delete application instances:

pm2 start app.js --name "my-api"

pm2 start web.js --name "web-interface"

pm2 stop web-interface

pm2 restart web-interface

pm2 delete web-interface

pm2 restart /http-[1,2]/

pm2 list # Or pm2 [list|ls|l|status]

The pm2 list command shows all available instances, and the pm2 monit command shows monitoring for all running instances.

PM2 has many advantages over other tools and is something of an industry standard for Node.js deployments.

Question 33

How do you gracefully shut down your Node.js process when something bad happens in code, like a lost database connection?

A graceful shutdown means that whenever the Node.js process must stop, we shut it down cleanly: closing all DB connections, releasing TCP ports, and freeing all occupied resources, so that when the process comes up again there are no issues.

A graceful shutdown can be triggered in different ways.

A shutdown can be caused by a code issue such as an unhandled promise rejection or a missing null check, by a database going down, or forcefully by the user with Ctrl+C.

// ErrorHandler here is an application-defined module
process.on('SIGINT', ErrorHandler.shutdown);
process.on('uncaughtException', ErrorHandler.onError);

process.on('unhandledRejection', (reason, p) => {
  console.error(reason, 'Unhandled Rejection at Promise', p);
  // release database connections
  // release resources
});

Question 34

How can you ensure zero downtime while deploying your Node.js application?

A naive deployment stops the application and then starts it again; in that case there is a downtime of some seconds until the application reconnects to databases such as Redis or MySQL. Zero downtime cannot be achieved with a single instance when heavy API traffic is coming in.

PM2 is a powerful tool for managing multiple instances across the cores of one machine. On a multi-core system we should always run multiple instances under PM2 to make optimal use of all cores.

  1. For zero downtime we should run the application in cluster mode, which allows networked Node.js applications (http(s)/tcp/udp servers) to be scaled across all available CPUs without any code modifications. This greatly increases the performance and reliability of your application, depending on the number of CPUs available.

To enable the cluster mode, just pass the -i option:

pm2 start app.js -i max (max tells PM2 to auto-detect the number of available CPUs and run that many processes)

For zero downtime we should use pm2 reload, not pm2 restart. pm2 restart app-name kills and restarts all processes at once, whereas pm2 reload app-name restarts the workers one by one, waiting for each new worker to spawn before killing the old one. This way live workers keep serving requests and the API service is never interrupted.

The ecosystem.config.js configuration below creates the maximum number of instances based on the cores available on the system and runs them all in cluster mode.

pm2 startOrReload ecosystem.config.js --update-env

This command starts the maximum number of instances if they have not been created yet, or reloads the existing ones. The --update-env parameter reloads instances with any newly added configuration.

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'api_app',
      script: 'app/server.js',
      instances: "max"
    }
  ]
};

Question 35

What is the use of the cluster module and how do you use it?

The cluster module provides a way to create child processes. Sometimes we need worker processes to run independent work and distribute load; for that purpose the cluster module creates child processes by forking the current process.

The cluster module is a core Node.js module, like fs or net. It contains a set of functions and properties that help us fork processes to take advantage of multi-core systems. A single Node.js process runs on one core, so on a multi-core system we should create as many worker processes as there are processors.

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  masterProcess();
} else {
  childProcess();
}

function masterProcess() {
  console.log(`Master ${process.pid} is running`);
  for (let i = 0; i < numCPUs; i++) {
    console.log(`Forking process number ${i}…`);
    cluster.fork();
  }
}

function childProcess() {
  console.log(`Worker ${process.pid} started`);
  // each worker can share a server port here
}

Question 37

How can you debug your Node.js application?

If you are using a tool like VS Code to write Node.js apps, debugging is easy; you can launch the debugger in a few simple steps.

To start debugging, run your Node.js application with the --inspect flag.

$ node --inspect <your_file>.js

You can then attach Chrome DevTools to the debug port and debug the way you would debug client-side JavaScript code.

Another (now legacy) option is node-inspector:

$ npm install -g node-inspector

$ node-debug app.js

where app.js is the name of your main Node application JavaScript file.

The node-debug command will load Node Inspector in your default browser.

Question 38

What are streams and why should we use them with large data?

Streams are collections of data, just as arrays are, but stream data might not be available all at once and need not fit in memory. Streams are not for synchronous consumption; their data is received over time, asynchronously.

Streams are a powerful way to send API responses when there is a lot of data to send.

This makes streams really powerful when working with large amounts of data, or with data arriving from an external source one chunk at a time, such as reading a big file. In that case we read the file in chunks and stream the data into the response.

They also give us the power of sending data in chunks. Just like the pipe command in Linux sends the output of one command to another, we can do exactly the same in Node with streams.

const fs = require('fs');
const server = require('http').createServer();

// buffering the whole file in memory
server.on('request', (req, res) => {
  fs.readFile('./app.txt', (err, data) => {
    if (err) throw err;
    res.end(data);
  });
});

// using streams
server.on('request', (req, res) => {
  const src = fs.createReadStream('./app.txt');
  src.pipe(res);
});

Question 39

What are the core modules in Node.js? Explain a few of them and their use.

Some of the core modules: events, fs, http, https, module, net, os, path, stream, child_process, cluster, readline, repl.

child_process

The child_process module provides the ability to spawn child processes in a manner that is similar, but not identical, to popen(3).

https://nodejs.org/api/child_process.html

cluster

A single instance of Node.js runs in a single thread. To take advantage of multi-core systems the user will sometimes want to launch a cluster of Node.js processes to handle the load. The cluster module allows you to easily create child processes that all share server ports.

https://nodejs.org/api/cluster.html

Events

Much of the Node.js core API is built around an idiomatic asynchronous event-driven architecture in which certain kinds of objects (called "emitters") periodically emit named events that cause Function objects ("listeners") to be called.

https://nodejs.org/api/events.html

fs

File I/O is provided by simple wrappers around standard POSIX functions. To use this module do require('fs'). All the methods have asynchronous and synchronous forms.

https://nodejs.org/api/fs.html

http

The HTTP interfaces in Node.js are designed to support many features of the protocol which have been traditionally difficult to use. In particular, large, possibly chunk-encoded, messages.

readline

The readline module provides an interface for reading data from a Readable stream (such as process.stdin) one line at a time.

repl

The repl module provides a Read-Eval-Print-Loop (REPL) implementation that is available both as a standalone program and for inclusion in other applications.

Question 40

Explain the event loop lifecycle and a few events inside the event loop.

In Node.js the event loop is part of the libuv library; in reality the event loop is the master that drives the JavaScript engine (V8, SpiderMonkey, etc.) to execute JavaScript code. The event loop runs on the same thread that executes your JavaScript.

When you run node index.js in your console, Node starts the event loop and then runs your main module from index.js outside the loop. Once the main module has executed, Node checks whether the loop is alive: if not, the process simply exits; otherwise it keeps listening to the event queue.

At some point during the event loop, the runtime starts handling the messages on the queue, starting with the oldest one. Once a task completes, an event is emitted about its completion and handed over to its handler.
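The ordering described above can be observed directly: synchronous code runs first, then promise microtasks, then timer callbacks picked off the queue.

```javascript
const order = [];
order.push('sync');                                   // runs immediately
setTimeout(() => order.push('timer'), 0);             // queued as a timer callback
Promise.resolve().then(() => order.push('microtask')); // queued as a microtask

// a later timer just to print the final order
setTimeout(() => console.log(order.join(' -> ')), 20);
// prints: sync -> microtask -> timer
```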

Question 41

How do you prevent unhandled exceptions in Node.js, and how do you handle them if they occur?

The Node.js event loop runs on a single thread, so uncaught exceptions are a critical issue to be aware of when developing applications.

Silently Handling Exceptions

Many people let their Node.js server(s) silently swallow errors.

Silently handling the exception

process.on('uncaughtException', function (err) {
  console.log(err);
});

This is bad, it will work but:

The root cause of the problem remains unknown, so this does not contribute to resolving whatever caused the exception (error).

If the application's database connection (pool) gets closed for some reason, errors will propagate constantly: the server keeps running but never reconnects to the DB. So write code that handles errors rather than generating unhandled exceptions, and catch whatever does slip through for debugging purposes so you can identify the cause.
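A safer pattern than swallowing the error is to log it, release resources, and exit non-zero so a supervisor such as PM2 can restart the process with a clean state. This is a sketch; the injectable exit parameter exists purely to make the handler testable, and the resource cleanup is left as a comment:

```javascript
function onFatalError(err, exit = process.exit) {
  console.error('Fatal:', err.stack || err);
  // close database pools / servers here before exiting
  exit(1); // non-zero exit code signals the supervisor to restart
}

process.on('uncaughtException', onFatalError);
```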

Question 42

How do you convert a callback-based library to a promise-based one, so we can write promises instead of callbacks?

This is an important technique used with many libraries such as the redis and mysql clients; we can use bluebird.promisifyAll to convert any callback-based library to a promise-based one.

db.notification.email.find({subject: 'promisify callback'}, (error, result) => {
  if (error) {
    console.log(error);
  }
  // normal code here
});

We use Bluebird's promisifyAll method to promisify what is conventionally a callback-based library.

After applying this change, the promise-based method names have Async appended to them:

let email = bluebird.promisifyAll(db.notification.email);
email.findAsync({subject: 'promisify callback'}).then(result => {
  // normal code here
})
.catch(console.error);

The same can be done for the redis library, which is callback based:

const redis = require('redis');
bluebird.promisifyAll(redis);

const client = redis.createClient();
client.getAsync('data:key').then(function(res) {
  console.log(res); // => 'bar'
});

Question 43

What is the global object in Node.js, how do you add objects to it, and how is it different from the browser's global environment?

In browsers, console.log(this) at the top level shows the window object. In Node.js the top-level scope of a module is the module itself, so a variable defined at the top level of a Node.js module is local to that module.

In Node.js, global represents the global scope; a variable added to the global object is available process-wide and can be accessed from any other module.

We can add common configuration to the global object, such as a MySQL connection object or a logger object, since these objects are used in many places in the application.

const mysql = require('mysql2');

global.connection = null;
try {
  // global.configuration is assumed to hold app-wide config, including db settings
  global.connection = mysql.createConnection(global.configuration.db);
} catch (err) {
  throw Error(err);
}
global.connection.connect((err) => {
  if (err) throw err;
});
global.connection.on('error', (err) => {
  logger.info(`Cannot establish a connection with the database (${err.code})`);
});
module.exports = global.connection;

global: the global namespace object.

In browsers, the top-level scope is the global scope: in the global scope var something defines a global variable. In Node.js this is different. The top-level scope is not the global scope; var something inside a Node.js module is local to that module.

Question 44

What is a circular dependency between modules in Node.js, and how do you fix it?

In the example below there are two files that require each other: foo.js requires bar.js and bar.js requires foo.js.

This is a circular dependency: it loads without an error, but at run time the behaviour is quite different from what you might expect.

foo is imported into the bar module, and bar is imported into the foo module.

The important part is that requiring happens synchronously. When bar.js is required into foo.js, bar.js immediately tries to require foo.js back, but foo.js has not finished executing and its module.exports has not been assigned yet. The result is that bar.js only gets a reference to an incomplete (empty) exports object!

This is called a circular dependency, and you should structure your imports and exports to avoid it.

https://www.npmjs.com/package/madge can be used to detect such kind of circular dependencies

// foo.js
var bar = require('./bar.js');
console.log('bar is:', bar);

var foo = function() {
  this.bInstance = new bar();
  this.property = 5;
};
module.exports = foo;

// bar.js
var foo = require('./foo.js');

var bar = function() {};
bar.prototype.doSomethingLater = function() {
  console.log(foo.property);
};
module.exports = bar;

Question 45

What is the Node.js require cache and how do you invalidate it?

When we require a module, Node.js caches it, and any other module requiring the same module gets the cached copy. This is the require cache that Node.js maintains during its execution cycle. It can also be invalidated when needed.

Yes, you can access the cache via require.cache[moduleName], where moduleName is the resolved filename of the module. Deleting an entry with delete require.cache[moduleName] causes the next require to load the actual file again.

So from the above description we can say: yes, the cache can be invalidated.

The cache is stored in the require.cache object, keyed by resolved filenames (e.g. ~/home/index.js rather than the ./home you would write in a require('./home') statement); require.resolve gives you the key.

Question 46

What is the best way to secure APIs in a Node.js application, and what well-known modules can be used?

Security covers a lot of ground, but here is how we can make APIs more secure:

  • Don't use deprecated or vulnerable versions of Express
  • Use the latest version of npm and run npm audit regularly
  • Use the helmet and csurf modules for added security
  • Secure cookies by signing them or marking them secure
  • Add a rate limiter to prevent brute-force attacks
  • Send security HTTP headers
  • Prevent SQL injection in your code
  • Run APIs over HTTPS

Secure Express.js Sessions and Cookies

Session cookie name reveals your application’s internals

Revealing what technologies you are using is one of the key things you should not do. If attackers know what kind of technology you are using, they can drastically narrow their search for vulnerable components in your application. There are a couple of ways internal implementation details can leak; one of them is your application's session cookie name.

Make cookies more secure

When you use cookies in your application, make sure to add the HttpOnly flag. HttpOnly ensures the cookie is sent only over HTTP(S) requests and cannot be read by client-side scripts. This is a good protection mechanism against cross-site scripting attacks, where attackers read your cookies through malicious scripts.

Signing cookies

Signed cookies prevent cookie forging. A signed cookie carries the cookie value plus a digital signature over it. When the cookie comes back to the server, the server validates its integrity by checking the signature.

https://medium.com/@tkssharma/secure-node-js-apps-7613973b6971

app.disable('x-powered-by');
app.use(helmet());
app.use(helmet.noCache({ noEtag: true })); // set Cache-Control header
app.use(helmet.noSniff()); // set X-Content-Type-Options header
app.use(helmet.frameguard()); // set X-Frame-Options header
app.use(helmet.xssFilter()); // set X-XSS-Protection header
app.set('trust proxy', ['loopback', 'linklocal', 'uniquelocal']);
app.use(expressSession({
  name: 'SESS_ID',
  secret: configServer.SESSION_SECRET,
  proxy: true,
  resave: true,
  saveUninitialized: true,
}));

Question 47

How can error handling be done in a Node.js API app, e.g. handling API errors?

Error handling in Express is done with middleware. Errors may be well-defined error codes, or runtime errors coming from a database, Redis, or some other data store.

Server errors: many status codes can be returned from an API, such as 301, 403, 404, or 500.

API error codes: in APIs we can return error codes with our own messages to notify the client about the situation on the server side, like "user data not found". These are known errors that we throw explicitly from code. But what should we do with runtime errors, like a database connection issue, a missing column, or a badly written query?

router.get('/users/:id', function(req, res, next) {
  var user = users.getUserById(req.params.id);
  if (user == null) {
    return res.status(404).json({ 'message': 'user not found' });
  }
  res.json(user);
});

For handling such unknown runtime errors we can create an error-handling middleware. This middleware receives an additional error object, giving details about what occurred, including the stack trace and status code.

Create the error handler as global middleware and register it on the app instance.

class errorHandlers {
  static internalServerError(err, req, res, next) {
    res.status(500).json({
      success: false,
      message: err.message,
      error: err.stack,
    });
  }
}
module.exports = errorHandlers;

// in your server.js
app.use(errorHandlers.internalServerError);

Question 48

How do you create basic middleware and register it in an application?

Middleware functions are plain JavaScript functions that have access to the request and response objects and the next function during the application's request-response cycle. In middleware we do pre- and post-processing on the request and response objects.

Some basic examples of request pre-processing:

  1. An HTTP POST request should not have an empty body (an example of request pre-processing).
  2. HTTP methods should have application/json as the content type.
  3. Secured routes should carry a token for authorization checks. A middleware can inspect requests to protected routes and validate that the token in the request is valid.

A middleware function should call the next function to move execution on to the next middleware.

If a middleware function does not end the request-response cycle itself, it must call next() to pass control to the next middleware function; otherwise the request will be left hanging.

We use middleware to do pre-processing and post-processing around request handling.

Consider a simple Express application that sends the response "Hello World!" for requests on the "/" route and listens on HTTP port 3000.

var express = require('express')
var app = express()

app.get('/', function (req, res) {
  res.send('Hello World!')
})

app.listen(3000)

Middleware function logger example

Here is a simple example of a middleware function called "logger". It logs the message 'LOGGED' on each and every request, because we register it on the app instance with app.use(middleware).

var myLogger = function (req, res, next) {
  console.log('LOGGED')
  next()
}

// this is how we register middleware
app.use(myLogger)

We can also attach middleware to a single route instead of registering it on the Express instance (which would make it global for every request); in that case the middleware runs only for that route.

Global middleware registered on the app instance includes body-parser, which extracts the request body from incoming client requests.

// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }))

// parse application/json
app.use(bodyParser.json())

// example of middleware registered only on a single route
var middleware = function (req, res, next) {
  next()
}

app.get('/', middleware, function (req, res) {
  res.send('Hello World!')
})

Question 49

How can logging be done in a Node.js application, whether API logging or simple logging to a file?

Logging is an important aspect of Node.js programming, and different modules are available to add logging to a Node.js application.

Logging is needed to capture bugs during application runtime. More specifically, we log because we want:

  1. To have a better understanding of how the application works.
  2. To discover what errors occur at runtime.
  3. To find out whether services are running properly, without unhandled errors.

winston is a popular module for logging. Winston is a multi-transport async logging library for Node.js; multi-transport means we can have different mediums to manage logs:

  1. Storing logs in files with a custom log format.
  2. Pushing logs to the console.
  3. Sending logs to another TCP channel.
  4. Winston has different log levels, and different formats can be specified for logging.

You can add winston to your project by installing it:

npm install winston --save

Once you have it, you can add winston to your project this way:

const winston = require('winston');

// winston 3.x API: createLogger (older winston versions used new winston.Logger)
const logger = winston.createLogger({
  transports: [
    new winston.transports.File(options.file), // `options` defined elsewhere
    new winston.transports.Console({}),
  ],
  exceptionHandlers: [
    // new winston.transports.File(options.errorLog)
  ],
  exitOnError: false, // do not exit on handled exceptions
});

// create a stream object with a 'write' function that will be used by `morgan`
logger.stream = {
  write(message, encoding) {
    logger.info(message);
  },
};

module.exports = logger;

Question 50

How can a Node.js process and API transactions be monitored using a tool or library?

Many tools provide real-time transaction monitoring and API logging, such as Keymetrics and New Relic, which give detailed knowledge of Node.js process status like memory consumption and time taken.

These tools are also used for performance monitoring of APIs.

New Relic and Keymetrics are industry standards, used in many enterprises to check and monitor Node.js processes.

https://rpm.newrelic.com

http://pm2.io/

Tools for Node.js process monitoring:

  1. AppDynamics.
  2. nodemon and forever, modules to run a process in the background (without monitoring).
  3. New Relic gives detailed monitoring statistics.
  4. Keymetrics is a PM2-based tool which shows statistics per instance and helps make the application highly available. PM2 lets us run multiple instances of the application on a system so we can use a multi-core machine efficiently.

Array

var city = ["Mumbai", "London", "NewYork", "Delhi", "Amsterdam", "Helsinki", "Dubai"]

Array methods (reference: https://www.w3schools.com/jsref/jsref_obj_array.asp):

// pop()           Removes the last element of an array
// push()          Adds new elements to the end of an array
// reduce()        Reduces the values to a single value (left-to-right)
// reduceRight()   Reduces the values to a single value (right-to-left)
// reverse()       Reverses the order of the elements in an array
// shift()         Removes the first element of an array and returns that element
// slice()         Selects a part of an array and returns the new array
// some()          Checks if any element in an array passes a test
// sort()          Sorts the elements of an array
// splice()        Adds/removes elements from an array
// toString()      Converts an array to a string and returns the result
// unshift()       Adds new elements to the beginning of an array and returns the new length
// valueOf()       Returns the primitive value of an array
// concat()        Joins two or more arrays and returns a copy of the joined arrays
// copyWithin()    Copies array elements within the array, to and from specified positions
// entries()       Returns a key/value pair Array Iteration Object
// every()         Checks if every element in an array passes a test
// fill()          Fills the elements in an array with a static value
// filter()        Creates a new array with every element that passes a test
// find()          Returns the value of the first element that passes a test
// findIndex()     Returns the index of the first element that passes a test
// forEach()       Calls a function for each array element
// from()          Creates an array from an iterable object
// includes()      Checks if an array contains the specified element
// indexOf()       Searches the array for an element and returns its position
// isArray()       Checks whether an object is an array
// join()          Joins all elements of an array into a string
// keys()          Returns an Array Iteration Object containing the keys of the array
// lastIndexOf()   Searches the array for an element, starting at the end, and returns its position
// map()           Creates a new array with the result of calling a function on each element
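A few of the methods above applied to the city array (note that slice/filter/map return new arrays and leave the original untouched):

```javascript
var city = ["Mumbai", "London", "NewYork", "Delhi", "Amsterdam", "Helsinki", "Dubai"];

var firstTwo = city.slice(0, 2);                  // ["Mumbai", "London"] — non-mutating
var longNames = city.filter(c => c.length > 6);   // names longer than 6 characters
var upper = city.map(c => c.toUpperCase());       // new array, original untouched
var totalChars = city.join("").length;            // total characters across all names
var hasDelhi = city.includes("Delhi");            // true
var idx = city.indexOf("Dubai");                  // 6 — the last element

console.log(firstTwo, longNames.length, upper[0], totalChars, hasDelhi, idx);
```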
String

// Methods on var str = "Visit W3Schools!":

var str = "Visit W3Schools!";
// charAt() / charCodeAt()   character (or its char code) at a given index
// search()      var n = str.search("W3Schools");   // 6 (-1 when not found)
// includes()    var n = str.includes("W33");       // false ("W3" would be true)
// indexOf() / lastIndexOf() position of a substring, searching from the start / the end
// substr()      var res = str.substr(1, 4);        // "isit"
// split()       var res = str.split("");           // V,i,s,i,t, ,W,3,S,c,h,o,o,l,s,!
//               var res = str.split(" ");          // ["Visit", "W3Schools!"]
// replace()     var res = str.replace("Visit", "Microsoft"); // "Microsoft W3Schools!"
// repeat()      var res = str.repeat(2);           // "Visit W3Schools!Visit W3Schools!"
// match()       var res = str.match(/oo/g);        // ["oo"]
// startsWith()  var n = str.startsWith("Visit");   // true
// endsWith()    var n = str.endsWith("Visit");     // false
// also: concat(), substring(), slice(), toString(), valueOf()
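The same reference methods run against the example string, as a quick sanity check:

```javascript
var str = "Visit W3Schools!";

var pos = str.search("W3Schools");               // 6 — index of the first match
var hasW3 = str.includes("W3");                  // true
var part = str.substring(6, 15);                 // "W3Schools"
var words = str.split(" ");                      // ["Visit", "W3Schools!"]
var swapped = str.replace("Visit", "Microsoft"); // "Microsoft W3Schools!"
var starts = str.startsWith("Visit");            // true
var ends = str.endsWith("Visit");                // false

console.log(pos, hasW3, part, words, swapped, starts, ends);
```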


Interview question

1. Callback, Promise, async-await
2. Load balancing, security, HTTP, DB, ORMs (Mongoose, Sequelize, TypeORM), design
3. System design, network, system scaling, load balancing, HTTP to HTTPS
4. Next.js, Express.js
5. Event loop, OOP (JS), lexical scope
6. Function/block scope, hoisting, closure, this: call, apply and bind
7. Objects, prototypes, class, data types, ==, ===
8. Event loop, callbacks, promises, async/await, performance

— request, response —

Header types — Authorization, Content_Type, Accepts

— Https Status Code —

20X -> ok/created/done: 200 (success), 201 (ok, created new resource), 202 (accepted, e.g. delete request acknowledged)
30X -> redirection: 301 (moved permanently), 302 (temporary move), 304 (cached, not modified)
40X -> client or transport errors: 400 (bad request), 401 (unauthorized: auth token not provided / not signed in), 402 (payment required, e.g. Google APIs), 403 (forbidden: auth token provided and signed in, but not authorized), 404 (not found)
50X -> server errors, request correct but the server could not process it: 500 (internal server error), 501 (not implemented), 502 (bad gateway), 503 (service unavailable), 504 (gateway timeout)

Load balancers: ALB (Application Load Balancer) in front of the API, with a queue -> taskmaster -> worker pool. How do queues work? (RabbitMQ, Kafka), service workers, database. ELB (edge load balancing via a CDN: Cloudflare, CloudFront).

ETag; queues for async operations, eventual consistency, publish-subscribe pattern.

Containerization: Docker and K8s, auto scaling.

Testing: Mocha, Chai, Jest — tests for the DB, API, and controller layers.

HOSTING

Namaste JavaScript — learning resource

YouTube — Coding Blocks — IIFEs and their use in JavaScript

https://www.youtube.com/watch?v=oxLAqN4noA0&list=PLl4Y2XuUavmufEvZlmluM5eWoer1WkLfz

function printThis() {
  console.log(this)
}

printThis() // global object: plain call, nothing before the dot

var o = { name: 'dk' }

let obj = {
  a: 10,
  b: 20,
  c: printThis,
  d: function () {
    printThis()                      // global: no object before the dot
    this.c()                         // obj: check what is before the dot — if it
                                     // is an object, that object is `this`
    console.log(this.c == printThis) // true
    this.c.bind(o)                   // returns a bound copy; does not call it
  }
}

obj.c()  // obj: called as a method, so `this` is obj

let x = obj.c  // assigns the function itself (obj.c() would assign its result,
               // and then x() throws "x is not a function")
x()            // global again: plain call, nothing before the dot
console.log(x === printThis) // true

obj.d()
-----------------------------------------------------
let obj1 = { a: 10, b: 20 }
let obj2 = Object.create(obj1) // obj2's prototype is obj1
console.log(obj2)   // {} — obj2 has no own properties
console.log(obj2.a) // 10 — looked up on the prototype (obj1)
obj1.a = 44
console.log(obj2.a) // 44 — obj2.a still reads through to obj1
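The lookup goes the other way only until obj2 gets an own property; assigning to obj2 shadows the prototype. A small extension of the example above:

```javascript
let obj1 = { a: 10, b: 20 };
let obj2 = Object.create(obj1); // obj2's prototype is obj1

obj2.a = 5; // creates an OWN property on obj2; obj1 is untouched
console.log(obj2.a);                   // 5 — own property shadows the prototype
console.log(obj1.a);                   // 10 — prototype unchanged
console.log(obj2.hasOwnProperty('a')); // true
console.log(obj2.hasOwnProperty('b')); // false — b still lives on obj1
console.log(Object.getPrototypeOf(obj2) === obj1); // true
```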

Promise (ES2015)

Promises are meant to solve problems with asynchronous calls. They came to solve callback hell, but introduced some complexity and new syntax of their own.
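As an illustration of how a callback-style API gets wrapped in a Promise (a minimal hand-rolled version of what util.promisify does; delayedDouble is a made-up example function):

```javascript
// A callback-style async function (Node convention: callback(err, result))
function delayedDouble(n, callback) {
  setTimeout(() => {
    if (typeof n !== 'number') return callback(new Error('not a number'));
    callback(null, n * 2);
  }, 10);
}

// Wrapping it in a Promise: resolve on success, reject on error
function delayedDoubleP(n) {
  return new Promise((resolve, reject) => {
    delayedDouble(n, (err, result) => {
      if (err) reject(err);
      else resolve(result);
    });
  });
}

delayedDoubleP(21).then(result => console.log(result)); // 42
```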

First look this —

  1. https://www.freecodecamp.org/news/javascript-promise-tutorial-how-to-resolve-or-reject-promises-in-js/ (Promise)
  2. https://javascript.info/async-await
  3. https://www.velotio.com/engineering-blog/understanding-node-js-async-flows-parallel-serial-waterfall-and-queues (Async-Await)
Promise -

For async operations we use promises. First, the callback way with setTimeout:

let a = undefined            // 1
setTimeout(() => {
  a = "hello world"          // 3 — runs after 3 seconds
  console.log(a)
}, 3000)
console.log(a)               // 2 — still undefined here
let prop = new Promise((resolve, reject) => {
  resolve(10);
  // reject("failed")
});

prop
  .then((item) => {
    console.log("first item", item);   // 10
    return item * 2;                   // 20
  })
  .then((item) => {
    console.log("second item", item);  // 20
    return item * 4;                   // 20 * 4 = 80
  })
  .then((item) => {
    console.log("third item", item);   // 80
    return item * 6;                   // 80 * 6 = 480
  })
  .then((item) => {
    console.log("fourth item", item);  // 480
    return item * 6;
  })
  .catch((err) => console.log(err))    // runs on error
  .finally(() => console.log("completed all the process")); // always runs; used for cleanup
fetch('https://jsonplaceholder.typicode.com/todos/1')
.then(res=> res.json())
.then(data=>console.log(data))
.catch(err=>console.log(err))
function dataget(value) {
  const URL = "https://jsonplaceholder.typicode.com/todos/1"
  const first = new Promise((resolve, reject) => {
    fetch(URL)
      .then((response) => resolve(response.json()));
  });
  first.then((responsedata) => {
    console.log(responsedata);
    return responsedata;
  });
  first.then((responsedata) => {
    const second = new Promise((resolve, reject) => {
      if (responsedata) {
        console.log(JSON.stringify(responsedata) + "hello");
        return responsedata + "hello";
      }
    });
  });
}
dataget(1);

Async-await (ES2017)

  1. Better syntax and cleaner code.
  2. Reduces the boilerplate around promises.
  3. Built on top of promises; a higher level of abstraction.
  4. A combination of promises and generators.
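The same chained arithmetic from the Promise example earlier reads much flatter with async/await (a sketch with the same numbers, no network involved):

```javascript
async function chain() {
  let item = await Promise.resolve(10); // await unwraps the promise
  console.log("first item", item);      // 10
  item = item * 2;                      // 20
  item = item * 4;                      // 80
  item = item * 6;                      // 480
  return item;                          // an async function returns a promise
}

chain()
  .then(result => console.log("final", result)) // final 480
  .catch(err => console.log(err))
  .finally(() => console.log("completed"));
```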
Async/await example (adapted from javascript.info):

async function showAvatar() {
  // read our JSON
  const URL = "https://jsonplaceholder.typicode.com/posts"
  let response = await fetch(URL);
  let user = await response.json();
  console.log(user[1].id);

  // read the related comments
  let githubResponse = await fetch(
    `https://jsonplaceholder.typicode.com/posts/${user[1].id}/comments`
  );
  let githubUser = await githubResponse.json();
  console.log(githubUser);
  return githubUser;
}
showAvatar()
  .then((item) => {
    console.log(item);
  })
  .catch((err) => console.log("Error in the system", err));

Example Fetch & Axios

fetch('https://pokeapi.co/api/v2/pokemon?limit=50')
  .then(res => res.json())
  .then(result => console.log(result.results[0].name))
  .catch(err => console.log(err))

Express Explained with Examples — Installation, Routing, Middleware, and More

Express

When it comes to building web applications using Node.js, creating a server can take a lot of time. Over the years Node.js has matured enough due to the support from the community. Using Node.js as a backend for web applications and websites helps the developers to start working on their application or product quickly.

In this tutorial, we are going to look into Express which is a Node.js framework for web development that comes with features like routing and rendering and support for REST APIs.

What is Express?

Express is the most popular Node.js framework because it requires minimal setup to start an application or an API, and is fast and unopinionated at the same time. In other words, it is not complex, unlike Rails and Django. Its flexibility can be gauged by the number of npm modules available, which make it pluggable at the same time. If you have basic knowledge of HTML, CSS, and JavaScript and how Node.js works in general, you will be able to get started with Express in no time.

Express was developed by TJ Holowaychuk and is now maintained by the Node.js foundation and open-source developers. To get started with development using Express, you need to have Node.js and npm installed. Installing Node.js on your local machine also gives you the command-line utility npm, which will help us install plugins, also called dependencies, later in our project.

To check if everything is installed correctly, please open your terminal and type:

node --version
v5.0.0
npm --version
3.5.2

If you are getting the version number instead of an error that means you have installed Node.js and npm successfully.

Why use Express?

Before we get into the mechanics of using Express as the backend framework, let us first explore why we should consider it, and the reasons for its popularity as a backend framework.

  • Express lets you build single page, multi-page, and hybrid web and mobile applications. Other common backend use is to provide an API for a client (whether web or mobile).
  • It comes with a default template engine, Jade which helps to facilitate the flow of data into a website structure and does support other template engines.
  • It supports MVC (Model-View-Controller), a very common architecture to design web applications.
  • It is cross-platform and is not limited to any particular operating system.
  • It leverages upon Node.js.

Disadvantages of Express?

Not good for CPU-intensive work.

Architecture :

Requests come in from clients (browsers, forms, other services). Once a request hits the server, it goes into the event queue. The event loop then picks requests up one at a time and dispatches the work: computation, database access, or file-system operations. For blocking operations, Node internally maintains threads with the help of a thread pool.

Work Flow

Advantages :

  1. Handles multiple client requests quickly and easily.
  2. No need to create multiple threads, thanks to the thread pool.
  3. Node.js utilizes fewer resources and less memory.

Each process (e.g. Process 1, MS Word) gets its own memory; to talk to each other, processes use IPC (inter-process communication).

IS NODE SINGLE-THREADED? WHEN IS IT NOT?

All the JavaScript, V8, and the event loop run in a single thread called the main thread. All synchronous code runs on the C++-backed main thread, but asynchronous calls sometimes run on the main thread and sometimes do not (e.g. file I/O goes to the thread pool).

Event Loop —

A central dispatcher that routes requests to C++ and results back to JavaScript. It receives sync/async requests and decides where to run them; once an operation completes, its callback goes back into the queue.
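The ordering the event loop enforces can be seen in a few lines: synchronous code runs first, then microtasks (promise callbacks), then timer callbacks.

```javascript
const order = [];

order.push('sync 1');
setTimeout(() => order.push('timer'), 0);              // macrotask: runs last
Promise.resolve().then(() => order.push('microtask')); // runs right after sync code
order.push('sync 2');

// inspect the order after everything above has run
setTimeout(() => {
  console.log(order); // ['sync 1', 'sync 2', 'microtask', 'timer']
}, 20);
```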


Creating package.json

A JSON (JavaScript Object Notation) file contains all the information about an Express project: the modules installed, the name of the project, the version, and other meta information.

mkdir express-app-example
cd express-app-example
npm init --yes
npm i express

npm installs third-party libraries into the project.

npm init --yes (or -y) helps to initialize the Node project.

This will generate a package.json file in the root of the project directory.

{
"name": "express-web-app",
"version": "0.1.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"license": "MIT"
}

Installing Express

npm install --save express

We can confirm that Express has installed correctly in two ways. First, there will be a new section in the package.json file named dependencies, under which our Express entry exists:

{
"name": "express-web-app",
"version": "0.1.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"license": "MIT",
"dependencies": {
"express": "4.16.0"
}
}

The second way is that a new folder called node_modules appears in the root of our project directory. This folder stores the packages we install locally for the project.

Building a Server with Express

To use our installed package for Express framework and create a simple server application, we will create the file, index.js, at the root of our project’s directory.

***require: Node wraps each file in its own module scope, so a var x in file1.js and a var x in file2.js do not clash — each module has its own scope. Its second use is importing what another module exports.

const express = require('express'); // go to node_modules & get express
const app = express();

app.get('/', (req, res) => res.send('Hello World!'));

// or send JSON instead:
app.get('/', (req, res) => {
  res.status(200).json({ success: true });
});

app.listen(3000, () => console.log('Example app listening on port 3000!'));

// ------------------- Foo.js -------------------
function addUser() {
  console.log('user Added')
}
function deleteUser() {
  console.log('user deleted')
}
module.exports = { addUser, deleteUser }

// ---------------- another file ----------------
const funcs = require("./Foo.js")
funcs.addUser();

To start the server, go to your terminal and type:

node index.js

or:
1) in package.json scripts, add "start": "node index.js"
2) npm start

Before we start using Express, we need to define an instance of it which handles the request and response from the server to the client. In our case, it is the variable app.

app.get() is a function that tells the server what to do when a GET request at the given route is made. It takes a callback function (req, res) that listens to the incoming request object req and responds accordingly using the response object res. Both req and res are made available to us by the Express framework.

The req object represents the HTTP request and has properties for the request query string, parameters, body, and HTTP headers. The res object represents the HTTP response that an Express app sends when it gets an HTTP request. In our case, we are sending a text Hello World whenever a request is made to the route /.

Lastly, app.listen() is the function that starts a port and host, in our case the localhost for the connections to listen to incoming requests from a client. We can define the port number such as 3000.

Request Parameters:

req.body — used for POST requests; by default Express does not parse the body, so we use the body-parser middleware.

All of the app.use(...) calls above are middleware.

req.params — used to get the route parameters.

Anatomy of an Express Application

A typical structure of an Express server file will most likely contain the following parts:

Dependencies : Importing the dependencies such as the express itself. These dependencies are installed using npm like we did in the previous example.

Instantiations: These are the statements to create an object. To use express, we have to instantiate the app variable from it.

Configurations: These statements are the custom application-based settings that are defined after the instantiations, or defined in a separate file (more on this when we discuss project structure) and required in our main server file.

Middleware: These functions determine the flow of the request-response cycle. They are executed on every incoming request. We can also define custom middleware functions; there is a section on them below.

Routes: They are the endpoints defined in our server that help perform operations for a particular client request.

Bootstrapping Server: The last thing that gets executed in an Express server is the app.listen() function, which starts our server.

We will now start discussing the sections we haven't previously covered.

Routing

Routing refers to how a server-side application responds to a client request at a particular endpoint. This endpoint consists of a URI (a path such as / or /books) and an HTTP method such as GET, POST, PUT, DELETE, etc.

Routes can be either good old web pages or REST API endpoints; in both cases the syntax is similar. A route can be defined as:

app.METHOD(PATH, HANDLER);

Routers are helpful in separating concerns, such as different endpoints, and keep relevant portions of the source code together. They help in building maintainable code. All routes are defined before the call to app.listen(); in a typical Express application, app.listen() will be the last function to execute.

Routing Methods

HTTP is a standard protocol for a client and a server to communicate over, and it provides different methods for a client to make requests. Each route has at least one handler function, or callback, which determines the response from the server for that particular route. For example, a route defined with app.get() handles GET requests and in return sends a simple message as a response.

// GET method route
app.get('/', (req, res) => res.send('Hello World!'));

Routing Paths

A routing path, combined with a request method, defines the endpoints at which requests can be made by a client. Route paths can be strings, string patterns, or regular expressions.

Let us define two more endpoints in our server based application.

app.get('/home', (req, res) => {
res.send('Home Page');
});
app.get('/about', (req, res) => {
res.send('About');
});

Consider the above code as a bare-minimum website with two endpoints, /home and /about. If a client requests the home page, it will respond with Home Page, and on /about it will send the response About. We use the res.send function to send the string back to the client whenever one of the two defined routes is matched.

Routing Parameters

Route parameters are named URL segments that are used to capture the values specified at their position in the URL. req.params object is used in this case because it has access to all the parameters passed in the url.

app.get('/books/:bookId', (req, res) => {
res.send(req.params);
});

The request URL from the client for the above code would be http://localhost:3000/books/23. The names of route parameters must be made up of word characters ([A-Za-z0-9_]). A related, very common pattern is a catch-all 404 route:

// For invalid routes
app.get('*', (req, res) => {
res.send('404! This is an invalid URL.');
});

If we now start the server from command line using node index.js and try visiting the URL: http://localhost:3000/abcd. In response, we will get the 404 message.
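To demystify what Express does with :bookId, here is a simplified, framework-free sketch of matching a pattern like /books/:bookId against a URL path (matchRoute is our own illustrative helper, not an Express API; the real router is more capable):

```javascript
// Match a concrete path against a pattern with named :segments and
// return the captured parameters, or null if the path does not match.
function matchRoute(pattern, path) {
  const patternParts = pattern.split('/');
  const pathParts = path.split('/');
  if (patternParts.length !== pathParts.length) return null;

  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      params[patternParts[i].slice(1)] = pathParts[i]; // capture named segment
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}

console.log(matchRoute('/books/:bookId', '/books/23'));   // { bookId: '23' }
console.log(matchRoute('/books/:bookId', '/authors/23')); // null
```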

Middleware Functions

Middleware functions are those functions that have access to the request object (req), the response object (res), and the next function in the application’s request-response cycle. The objective of these functions is to modify request and response objects for tasks like parsing request bodies, adding response headers, make other changes to request-response cycle, end the request-response cycle and call the next middleware function.

The next function is a function in the Express router which executes the middleware functions succeeding the current one. If a middleware function does not call next() (and does not end the response), the request-response cycle ends there. The name next here is totally arbitrary and you can name it whatever you like, but it is important to stick to best practices and follow conventions, especially if you are working with other developers.

Also, when writing a custom middleware, do not forget to call next() (unless you end the response). If you do not, the request-response cycle will hang in the middle of nowhere, and your server may cause the client to time out.

Let us create a custom middleware function to grasp this concept. Take this code for example:

const express = require('express');
const app = express();
// Simple request time logger
app.use((req, res, next) => {
  console.log("A new request received at " + Date.now());
  // This call tells Express that more processing is required for the
  // current request, in the next middleware function/route handler.
  next();
});
app.get('/home', (req, res) => {
res.send('Home Page');
});
app.get('/about', (req, res) => {
res.send('About Page');
});
app.listen(3000, () => console.log('Example app listening on port 3000!'));

If app.use(logger) is registered after the routes, it will never run for them; and if a route handler does not call next(), any middleware registered later is skipped. Middleware can also pass values along to the next function (for example by setting properties on req).

The last handler in the chain has nothing left to call; it should end the response.

MIDDLEWARE ERROR HANDLING —

  1. A function with 4 parameters (err, req, res, next) is an Express error-handling middleware.
  2. A function with 3 parameters is a normal middleware.

Middleware registers functions to be called in order; since it is a chain, each one calls next() to move to the next phase.

Errors can be caught at any point in the chain.
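The 4-vs-3 parameter rule can be made concrete with a toy dispatcher (a sketch of the idea, not Express internals): normal middleware run while there is no error; once next(err) is called, only 4-argument handlers run.

```javascript
// Toy middleware runner: fn.length is a function's declared parameter
// count, so 4-arg functions are treated as error handlers.
function run(middlewares, req, res) {
  let i = 0;
  function next(err) {
    while (i < middlewares.length) {
      const fn = middlewares[i++];
      if (err && fn.length === 4) return fn(err, req, res, next); // error handler
      if (!err && fn.length < 4) return fn(req, res, next);       // normal middleware
    }
  }
  next();
}

const calls = [];
run([
  (req, res, next) => { calls.push('logger'); next(); },
  (req, res, next) => { next(new Error('boom')); },        // something failed
  (req, res, next) => { calls.push('skipped'); next(); },  // skipped after the error
  (err, req, res, next) => { calls.push('caught: ' + err.message); },
], {}, {});

console.log(calls); // ['logger', 'caught: boom']
```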

To set up any middleware, whether custom or available as an npm module, we use the app.use() function. It has one optional parameter, path, and one mandatory parameter, callback. In our case, we are not using the optional path parameter.

app.use((req, res, next) => {
console.log('A new request received at ' + Date.now());
next();
});

The above middleware function is called for every request made by the client. When running the server you will notice that, for every browser request on the endpoint /, you get a message in your terminal:

A new request received at 1467267512545

Middleware functions can be used for a specific route. See the example below:

const express = require('express');
const app = express();
//Simple request time logger for a specific route
app.use('/home', (req, res, next) => {
console.log('A new request received at ' + Date.now());
next();
});
app.get('/home', (req, res) => {
res.send('Home Page');
});
app.get('/about', (req, res) => {
res.send('About Page');
});
app.listen(3000, () => console.log('Example app listening on port 3000!'));

This time, you will only see a similar message when the client requests the endpoint /home, since that route is mentioned in app.use(). Nothing will be shown in the terminal when the client requests the endpoint /about.

Order of middleware functions is important since they define when to call which middleware function. In our above example, if we define the route app.get('/home')... before the middleware app.use('/home')..., the middleware function will not be invoked.

Third-Party Middleware Functions

Middleware functions are a useful pattern that lets developers reuse code within their applications and even share it with others in the form of npm modules. The essential definition of middleware is a function with three arguments: request (or req), response (res), and next, which we observed in the previous section.

Often in our Express-based server application, we will be using third-party middleware functions. These are not part of Express itself; they are like plugins that can be installed using npm, and this is one reason Express is considered flexible.

Some of the most commonly used middleware functions in an Express application are:

body-parser

It allows developers to process incoming data, such as body payload. The payload is just the data we are receiving from the client to be processed on. Most useful with POST methods. It is installed using:

npm install --save body-parser

Usage:

const bodyParser = require('body-parser');

// To parse URL encoded data
app.use(bodyParser.urlencoded({ extended: false }));

// To parse json data
app.use(bodyParser.json());

It is probably one of the most used third-party middleware functions in any Express application.

cookie-parser

It parses the Cookie header and populates req.cookies with an object keyed by cookie names. To install it,

$ npm install --save cookie-parser

const cookieParser = require('cookie-parser');
app.use(cookieParser());

express-session

This middleware function creates a session middleware with the given options. A session is often used in features such as login/signup.

$ npm install --save express-session

const session = require('express-session');
app.use(
  session({
    secret: 'arbitrary-string',
    resave: false,
    saveUninitialized: true,
    cookie: { secure: true }
  })
);

morgan

The morgan middleware keeps track of all the requests and other important information depending on the output format specified.

npm install --save morgan

const logger = require('morgan');
// ... Configurations
app.use(logger('common'));

common is a predefined format which you can use in the application. There are other predefined formats such as tiny and dev, but you can also define your own custom format using the string parameters morgan makes available.

A list of most used middleware functions is available at this link.

Serving Static Files

To serve static files such as CSS stylesheets, images, etc., Express provides a built-in middleware function, express.static. Static files are those files that a client downloads from a server.

It is the only middleware function that comes with Express framework and we can use it directly in our application. All other middlewares are third party.

By default, Express does not serve static files; we have to use this middleware function. A common practice in web application development is to store all static files under a 'public' directory in the root of the project. We can serve these static files by writing in our index.js file:

app.use(express.static('public'));

Now, the static files in our public directory will be loaded.

http://localhost:3000/css/style.css
http://localhost:3000/images/logo.png
http://localhost:3000/images/bg.png
http://localhost:3000/index.html

Multiple Static Directories

To use multiple static assets directories, call the express.static middleware function multiple times:

app.use(express.static('public'));
app.use(express.static('files'));

Virtual Path Prefix

A fixed path prefix can also be provided as the first argument to the express.static middleware function. This is known as a virtual path prefix, since the actual path does not exist in the project.

app.use('/static', express.static('public'));

If we now try to load the files:

http://localhost:3000/static/css/style.css
http://localhost:3000/static/images/logo.png
http://localhost:3000/static/images/bg.png
http://localhost:3000/static/index.html

This technique comes in handy when serving static files from multiple directories; the prefixes help distinguish between them.

jwt:

Note: when a middleware function calls next(), the request continues down the chain, so both the logging middleware and the final response handler run.

Template Engines

Template engines are libraries that allow us to use different template languages. A template language is a special set of instructions (syntax and control structures) that instructs the engine how to process data. Using a template engine is easy with Express. The popular template engines such as Pug, EJS, Swig, and Handlebars are compatible with Express. However, Express comes with a default template engine, Jade, which is the first released version of Pug.

To demonstrate how to use a template engine, we will be using Pug. It is a powerful template engine that provides features such as filters, includes, interpolation, etc. To use it, we first have to install it as a module in our project using npm.

npm install --save pug

This command will install pug; to verify that it installed correctly, take a look at the package.json file. To use it in our application, we first have to set it as the template engine and create a new directory, ‘./views’, where we will store all the files related to our template engine.

app.set('view engine', 'pug');
app.set('views', './views');

Since we are using app.set() which indicates configuration within our server file, we must place them before we define any route or a middleware function.

In the views directory, create a file called index.pug.

doctype html
html
  head
    title Hello from Pug
  body
    p.greetings Hello World!

To run this page, we will add the following route to our application.

app.get('/hello', (req, res) => {
  res.render('index');
});

Since we have already set Pug as our template engine, we do not have to provide the .pug extension in res.render. This function renders the code in any .pug file to HTML for the client to display, since browsers can only render HTML. If you start the server now and visit http://localhost:3000/hello, you will see the output Hello World! rendered correctly.

In Pug, notice that we do not have to write closing tags for elements as we do in HTML. The above code will be rendered into HTML as:

<!DOCTYPE html>
<html>
  <head>
    <title>Hello from Pug</title>
  </head>
  <body>
    <p class="greetings">Hello World!</p>
  </body>
</html>

The advantage of using a Template Engine over raw HTML files is that they provide support for performing tasks over data. HTML cannot render data directly. Frameworks like Angular and React share this behaviour with template engines.

You can also pass values to template engine directly from the route handler function.

app.get('/', (req, res) => {
  res.render('index', { title: 'Hello from Pug', message: 'Hello World!' });
});

For above case, our index.pug file will be written as:

doctype html
html
  head
    title= title
  body
    h1= message

The output will be the same as in the previous case.

Project Structure of an Express App

Since Express does not enforce much structure on the developer, it can sometimes be overwhelming to decide what project structure to follow. There is no officially defined structure, but the most common approach for any Node.js based application is to separate different tasks into different modules, i.e. into separate JavaScript files.

Let us go through a typical structure of an Express based web application.

project-root/
  node_modules/     // Installed packages are stored here
  config/
    db.js           // Database connection and configuration
    credentials.js  // Passwords/API keys for external services used by your app
    config.js       // Environment variables
  models/           // Mongoose schemas
    books.js
    things.js
  routes/           // Routes for different entities in different files
    books.js
    things.js
  views/
    index.pug
    404.pug
    ...
  public/           // All static files
    images/
    css/
    javascript/
  app.js
  routes.js         // Require all routes in this file, then require this file in app.js
  package.json

This pattern is commonly known as MVC (model-view-controller), because the database models, the UI of the application (views), and the controllers (in our case, routes) are written and stored in separate files. This design pattern makes a web application easier to scale when you introduce more routes or static files in the future, and keeps the code maintainable.

Node.js is a great runtime environment — and here’s why you should use it

An introduction to the scalable, extensible, easily available, self-sufficient, and highly effective runtime environment

Node.js is a cross-platform runtime environment for JavaScript, which is free and open-sourced. It is full-stack, so it can be used to develop both the client-side and the server-side of an application.

Who uses Node.js? Node.js is a popular tech stack choice for the companies developing online games, instant messengers, social media platforms, or video conferencing tools. It is perfectly suitable for real-time applications, which need the app data to be constantly updated.

Before I start listing the advantages of Node.js, there is some terminology to clarify so that we are all on the same page. If you are already familiar with these concepts, feel free to scroll past them.

Google’s V8 engine is the engine that Node.js is implemented with. Initially, it was developed by Google and for Google. V8 was written in C++ and aimed to compile JS functions into machine code. The same engine is used by Google Chrome. It is known for impressively high speeds and constantly improved performance.

The event-based model stands for detecting events as soon as they take place and dealing with them accordingly. You can use Promises, async/await, and callbacks for handling events. For example, this snippet handles writing a CSV file using Promises (it relies on the third-party csv-writer package).

const createCsvWriter = require('csv-writer').createObjectCsvWriter;

const path = 'logs.csv';
const header = [
  { id: 'id', title: 'id' },
  { id: 'message', title: 'message' },
  { id: 'timestamp', title: 'timestamp' }
];
const data = [
  { id: 0, message: 'message1', timestamp: 'localtime1' },
  { id: 1, message: 'message2', timestamp: 'localtime2' },
  { id: 2, message: 'message3', timestamp: 'localtime3' }
];

const csvWriter = createCsvWriter({ path, header });
csvWriter.writeRecords(data)
  .then(() => console.log('The CSV file was written successfully!'))
  .catch(err => console.error('Error:', err));

Non-blocking Input/Output request handling is the way Node.js processes requests. Usually, code is executed sequentially: a request cannot be processed until the previous one is finished. In the non-blocking model, requests do not have to wait in line. This is what makes single-threaded Node.js so effective: requests are processed concurrently and response times stay short.

npm is a Node.js package manager and an open marketplace for various JS tools. It is the largest software registry in the world. Currently, it features over 836,000 libraries.

So, why Node.js development? Let’s see what the benefits of Node.js are.

JavaScript

Node.js is JavaScript-based. JavaScript is one of the most popular and simplest coding languages in the IT world. It is easy to learn for beginning developers. Even people without the knowledge of JavaScript but with some basic technical background can read and understand the code.

More than that, the pool of JavaScript talents is large, so as a business owner, you have full freedom to choose the team to work with.

Scalability

Node.js applications are easily scalable both horizontally and vertically. Horizontally, new nodes are easily added to the existing system. Vertically, additional resources can be easily added to the existing nodes.

When developing an application with Node.js, you do not have to create a large monolithic core. Instead, you can develop a set of modules and microservices, each running in its own process. All these small services communicate with lightweight mechanisms and comprise your application. Adding an extra microservice is as simple as it can get. This way, the development process becomes much more flexible.

Extensibility

Among other advantages of Node.js, there is the opportunity to integrate it with a variety of useful tools. Node.js can be easily customized and extended.

It can be extended with built-in APIs for the development of HTTP or DNS servers. To facilitate front-end development with old versions of Node or browser, Node.js can be integrated with a JS compiler Babel.

For unit-testing, it works perfectly with, for example, Jasmine. For deployment monitoring and troubleshooting purposes, it works well with Log.io.

Such tools as Migrat, PM2, and Webpack can be used for data migration, process management, and module bundling respectively. In addition, Node.js is expanded with such frameworks as Express, Hapi, Meteor, Koa, Fastify, Nest, Restify and plenty of others.

Availability

Node.js is open-source. Its creators have granted everyone the right to learn, develop, and distribute the technology for any purpose. The Node.js environment is one hundred percent free. Ready-made modules, libraries, and code samples are open-sourced, so you can configure your application easily and for free. Learning to work with Node.js is likewise open to everyone willing to acquire the technology.

Self-Sufficiency

There are a lot of convenient repositories with various ready-made modules. The default package manager npm also offers a variety of additional libraries and tools. These significantly facilitate the development process.

Also, Node.js technology can be used to develop both front-end and back-end with the same language. You can work with the same team until the final product is implemented. It simplifies communication and spares you plenty of organizational tasks.

You can even use Node.js as a platform for Machine Learning and Artificial Intelligence training.

const tf = require('@tensorflow/tfjs-node');

const trainData = [
  { input: [-120, -100, -60, -40, -60, -80, -80, -60, -40, -60, -80, -100].map(value => Math.abs(value)), output: [1] },
  { input: [-82, -63, -45, -55, -77, -98, -122, -90, -55, -44, -61, -78].map(value => Math.abs(value)), output: [0] },
  // ...
  { input: [-80, -60, -40, -60, -80, -100, -120, -100, -60, -40, -60, -80].map(value => Math.abs(value)), output: [0] },
];

const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [12], units: 12, activation: 'sigmoid' }));
model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));

const preparedData = tf.tidy(() => {
  tf.util.shuffle(trainData);
  const inputs = trainData.map(d => d.input);
  const outputs = trainData.map(d => d.output);
  const inputTensor = tf.tensor2d(inputs, [trainData.length, trainData[0].input.length]);
  const labelTensor = tf.tensor2d(outputs, [trainData.length, 1]);
  const inputMax = inputTensor.max();
  const inputMin = inputTensor.min();
  const labelMax = labelTensor.max();
  const labelMin = labelTensor.min();
  const normalizedInputs = inputTensor.sub(inputMin).div(inputMax.sub(inputMin));
  const normalizedOutputs = labelTensor; // labels are already 0/1
  return {
    inputs: normalizedInputs,
    outputs: normalizedOutputs,
    inputMax,
    inputMin,
    labelMax,
    labelMin,
  };
});

model.compile({
  optimizer: tf.train.adam(),
  loss: tf.losses.meanSquaredError,
  metrics: ['mse'],
});

const batchSize = 32;
const epochs = 50;
const trainedModel = model.fit(preparedData.inputs, preparedData.outputs, { batchSize, epochs, shuffle: true });

Universality

Node.js is cross-platform. For instance, a Node.js developer can create a cross-platform desktop application for Windows, Linux, and Mac. What is more, Node.js is not only for mobile, desktop, and web development. The advantages of Node.js are actively applied in the development of cloud or IoT solutions.

Simplicity

Node.js has a low entry threshold. It is quite simple to pick up for people who know JavaScript. It is also worth pointing out that the low entry threshold translates directly into a large number of low-quality specialists.

Automation

Node.js grants the opportunity to automate repetitive operations, schedule actions, or share modification records. Node.js automatically groups functions and keeps your code in order. Furthermore, there is an extensive built-in library of UI templates or ready-to-go functionality.

High Performance, Speed, and Resource-Efficiency

In Node.js, the JavaScript code is interpreted with the help of Google’s V8 JS engine. Google invests heavily in its engine, so the performance is constantly improved.

Node.js executes code outside a web browser, which greatly improves the performance and resource-efficiency of an application. Also, it allows using features that are not available for the browser, such as a direct file system API, TCP sockets etc.

The code execution is speedy and several requests can be processed simultaneously since Node.js runtime environment supports the non-blocking event-driven input/output operations. Node.js also offers the feature of single module caching, which allows the application to load faster and be more responsive.

Community Support

Among the advantages of using Node.js, developers mention the global developer community. There is an immense number of active developers who contribute to open-source, develop and support the framework, and share their learning insights or coding experience with others.

Node.js is well-supported on GitHub, and it is more popular there than, for example, React. Moreover, such companies as IBM, PayPal, eBay, Microsoft, Netflix, Yahoo!, LinkedIn, or NASA support and actively use Node.js.

However…

It would not be fair to list only the benefits of Node.js without mentioning the drawbacks of Node.js. Presenting a one-sided point of view is not a healthy practice. I want you to understand that no solution is perfect, and Node.js is no exception.

Repositories are extensive, but sometimes they resemble a landfill: there are a lot of unnecessary, overly complicated, or incomprehensible modules. The language has some confusing features that are difficult to understand, and some modern libraries and frameworks are overloaded. My takeaway is as follows: measure is a treasure. If you know well what you are working with and how to do it best, Node.js is the tool you need. Why do we use Node.js? Because there are a lot of useful features, the code is easy to understand, and the solutions can be effective. Otherwise — oh well.

fetch-API-tutorial — GET, POST, PUT, PATCH, and DELETE

A guide to using JavaScript’s simple Fetch API interface with typicode’s JSON placeholder as the fake API to play with.

GET, POST, PUT, PATCH, and DELETE are five most common HTTP methods for retrieving from and sending data to a server.

We will be using this API for demonstrations, with credits to typicode on GitHub: https://jsonplaceholder.typicode.com/todos We’ll also get our hands dirty by using JavaScript’s Fetch API to make requests to the API. Fetch API is JavaScript’s super simple built-in interface for making requests to servers. (Gone are the days of importing other interfaces!)

https://medium.com/@9cv9official/what-are-get-post-put-patch-delete-a-walkthrough-with-javascripts-fetch-api-17be31755d28

https://assertible.com/blog/7-http-methods-every-web-developer-should-know-and-how-to-test-them (new)

The GET method

The GET method is used to retrieve data from the server.

// GET retrieves all to-dos
fetch('https://jsonplaceholder.typicode.com/todos')
  .then(response => response.json())
  .then(json => console.log(json))
// will return all resources

// GET retrieves the to-do with a specific URI (in this case id = 5)
fetch('https://jsonplaceholder.typicode.com/todos/5')
  .then(response => response.json())
  .then(json => console.log(json))
/* will return this specific resource:
{
  "userId": 1,
  "id": 5,
  "title": "laboriosam mollitia .....quia provident illum",
  "completed": false
}
*/

The POST method

The POST method sends data to the server and creates a new resource. In short, this method is used to create a new data entry.

A POST method with Fetch API looks like this:

// POST adds a random id to the object sent
fetch('https://jsonplaceholder.typicode.com/todos', {
  method: 'POST',
  body: JSON.stringify({
    userId: 1,
    title: "clean room",
    completed: false
  }),
  headers: {
    "Content-type": "application/json; charset=UTF-8"
  }
})
  .then(response => response.json())
  .then(json => console.log(json))
/* will return
{
  "userId": 1,
  "title": "clean room",
  "completed": false,
  "id": 201
}
*/

Note that we needed to pass in the request method, body, and headers. We did not pass these in earlier for the GET method because by default these fields are configured for the GET request, but we need to specify them for all other types of requests. In the body, we assign values to the resource’s properties, stringified. Note that we do not need to assign a URI — the API will do that for us. As you can see from the response, the API assigns an id of 201 to the newly created resource. (Note: The server we are using is a placeholder service, so the server is just simulating the correct responses. No actual change is being done to the API, so don’t be confused if you head to https://jsonplaceholder.typicode.com/todos but do not find the new resource added.)

The PUT method (update)

The PUT method is most often used to update an existing resource. If you want to update a specific resource (which comes with a specific URI), you can call the PUT method to that resource URI with the request body containing the complete new version of the resource you are trying to update.

Let’s try it

Here is a put method that is requesting a change of the name of the task with id 5:

// PUT to the resource with id = 5 to change the name of the task
fetch('https://jsonplaceholder.typicode.com/todos/5', {
  method: 'PUT',
  body: JSON.stringify({
    userId: 1,
    id: 5,
    title: "hello task",
    completed: false
  }),
  headers: {
    "Content-type": "application/json; charset=UTF-8"
  }
})
  .then(response => response.json())
  .then(json => console.log(json))
/* will return
{
  "userId": 1,
  "id": 5,
  "title": "hello task",
  "completed": false
}
*/

Once again, append this code to your index.html file and observe the network changes. Notice that we specify the method as PUT and stringify the JSON object passed into the body. Note that the request URL points to the specific resource we want to change, and the body contains all of the resource’s properties, whether or not every property needs to be changed. The response will be the new version of the resource. (Note again that it is a simulated response.)

The PATCH method (update)

The PATCH method is very similar to the PUT method because it also modifies an existing resource. The difference is that for the PUT method, the request body contains the complete new version, whereas for the PATCH method, the request body only needs to contain the specific changes to the resource, specifically a set of instructions describing how that resource should be changed, and the API service will create a new version according to that instruction.

Let’s try it

// PATCH to the resource with id = 1
// update that the task is completed
fetch('https://jsonplaceholder.typicode.com/todos/1', {
  method: 'PATCH',
  body: JSON.stringify({
    completed: true
  }),
  headers: {
    "Content-type": "application/json; charset=UTF-8"
  }
})
  .then(response => response.json())
  .then(json => console.log(json))
/* will return
{
  "userId": 1,
  "id": 1,
  "title": "delectus aut autem",
  "completed": true
}
*/

As you can see here, the request is very similar to the PUT request, but the body of the request contains only the property of the resource that needs to be changed. The response is the new version of the resource.

The DELETE method

The DELETE method is used to delete a resource specified by its URI.

Let’s try it

The DELETE request simply looks like this, for deleting a specific resource:

// DELETE task with id = 1
fetch('https://jsonplaceholder.typicode.com/todos/1', {
  method: 'DELETE'
})
// empty response: {}

INTERVIEW QUESTION

Design pattern

Singleton

The Singleton pattern restricts the instantiation of a class and ensures that only one instance of the class exists over the application lifecycle.
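In Node, the module cache already gives you singletons for free (any module is evaluated once and shared), but the pattern can also be written explicitly. The Database class below is an illustrative example, not a real library:

```javascript
class Database {
  constructor() {
    // If an instance already exists, hand it back instead of creating another.
    if (Database.instance) {
      return Database.instance;
    }
    this.connections = 0;
    Database.instance = this;
  }

  connect() {
    this.connections += 1;
    return this.connections;
  }
}

const a = new Database();
const b = new Database();
console.log(a === b); // true: both variables reference the same instance
a.connect();
console.log(b.connections); // 1: state is shared through the single instance
```

A typical use case is a shared resource such as a database connection pool or a configuration object.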

Factory

It comes under the creational patterns, as it provides one of the best ways to create an object by calling a factory method.
In the Factory pattern, we create objects without exposing the creation logic to the client, and refer to the newly created object through a common interface.
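A minimal sketch of the pattern; the Book and Magazine classes are illustrative stand-ins for whatever concrete types your application creates:

```javascript
// Each product type shares a common interface: { describe() }.
class Book {
  constructor(title) { this.title = title; }
  describe() { return `Book: ${this.title}`; }
}

class Magazine {
  constructor(title) { this.title = title; }
  describe() { return `Magazine: ${this.title}`; }
}

// The factory hides which concrete class gets instantiated;
// callers only deal with the common interface.
function createPublication(type, title) {
  switch (type) {
    case 'book': return new Book(title);
    case 'magazine': return new Magazine(title);
    default: throw new Error(`Unknown type: ${type}`);
  }
}

const item = createPublication('book', 'Node.js Patterns');
console.log(item.describe()); // Book: Node.js Patterns
```

Because clients never call the constructors directly, new product types can be added by extending the factory without touching calling code.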

Command

The Command pattern encapsulates actions as objects.

  • The four terms always associated with the command pattern are command, receiver, invoker, and client.
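A minimal sketch showing all four roles; the light-switch domain is purely illustrative:

```javascript
// Receiver: knows how to perform the actual work.
class Light {
  constructor() { this.on = false; }
  turnOn() { this.on = true; }
  turnOff() { this.on = false; }
}

// Command: encapsulates an action (and how to undo it) as an object.
class TurnOnCommand {
  constructor(light) { this.light = light; }
  execute() { this.light.turnOn(); }
  undo() { this.light.turnOff(); }
}

// Invoker: triggers commands without knowing what they do,
// which also makes undo/redo history trivial to keep.
class Remote {
  constructor() { this.history = []; }
  run(command) {
    command.execute();
    this.history.push(command);
  }
  undoLast() { this.history.pop().undo(); }
}

// Client: wires receiver, command, and invoker together.
const light = new Light();
const remote = new Remote();
remote.run(new TurnOnCommand(light));
console.log(light.on); // true
remote.undoLast();
console.log(light.on); // false
```

Since every action is a first-class object, commands can be queued, logged, or undone uniformly.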

Architecture Approach

Microservices: Choose a microservices architecture by considering the below points:

  • Easy to scale out horizontally based on the demand.
  • System resilience. We can achieve high availability.
  • There is a scope to extend/replace the functionality without a major impact on the entire system.

The above system is divided into the following services, based on the microservices architecture paradigm

  • Product Catalog Search Service
  • User Registration service
  • User Login Service
  • Shopping Cart Service
  • Product Detail Page Service
  • Checkout Service

Ecommerce Backend System

CSS

  1. Difference between inline-block and block
  2. Two ways of vertical alignment
  3. box-sizing: width + padding + border (box-sizing: border-box)
  4. background-size: cover / contain

Course Curriculum

1 Introduction

2 Preview

3 Course Introduction (6:56)

4 Breaking Down the Interview Process The Big Picture (3:40)

5 Preparing Your Resume (6:36)

6 Recruiter Call (2:55)

7 Initial Screening Call (5:34)

8 Coding Project (3:12)

9 Onsite Interview (5:53)

10 Job Offer and Salary Negotiation (2:36)

11 JavaScript Section Overview —

12 JavaScript (3:00)

13 Variable Declarations — var vs let vs const (9:25)

14 Variable Declarations Exercise & Solution (3:29)

15 Understanding Scope (10:08)

16 Closures (10:16)

17 Closures Exercise & Solution (8:12)

18 Function Currying (7:57)

19 this Keyword (11:54)

20 this Keyword Exercise & Solution (5:18)

21 Arrow Functions (5:14)

22 Arrow Functions Exercise & Solution (1:35)

23 Function Currying Exercise and Solution (6:33)

24 Prototype (11:09)

25 More on Prototype (5:33)

26 Prototypal Inheritance (14:27)

27 ES6 class Keyword (4:06)

28 ES6 class Keyword Exercise & Solution (5:05)

29 Map (6:37)

30 Set (3:30)

31 Iterables and Iterators (19:51)

32 Iterables and Iterators Exercise & Solution (7:34)

33 Generators (10:16)

34 Generators Exercise & Solution (3:47)

35 Asynchronous JavaScript (5:47)

36 Timeouts and Intervals (7:38)

37 Timeouts and Intervals Exercise & Solution (7:46)

38 Callbacks (7:30)

39 Promises (Part 1) (15:52)

40 Promises (Part 2) (6:32)

41 async await (10:40)

42 async await Exercise and Solution (1:26)

43 Event Loop — Synchronous Code (7:12)

44 Preview Event Loop — setTimeout (8:47)

45 Event Loop — Promise (7:52)

46 Event Loop — setTimeout with Promise (8:24)

47 Suggested Learning (2:05)

48 Problem Solving Section Overview (2:04)

49 Sum of Numbers

50 Factorial of a Number

51 Fibonacci Sequence

Prime Numbers, Palindrome, Anagram, Reverse Words, Remove Vowels from a String, Palindromic Substrings, Array of Fullnames, Longest Word in a String, Array and Index, Union, Intersection, Difference, Flatten Array, Duplicate Elements, Non Repeating Words, Longest Palindrome, Longest Substring, Group Anagrams

Miscellaneous Section Overview (1:29)

HTML (13:05)

CSS (19:06)

React and Redux (9:08)

Tooling (2:25)

Wrapping Up (1:43)

Security:

Security standards and protocols are guidelines and procedures designed to ensure the integrity, confidentiality, and availability of information system resources. Here are some important ones related to software development:

  1. HTTPS (Hypertext Transfer Protocol Secure): This is a protocol used for secure communication over a computer network, widely used on the Internet. It is the result of layering the Hypertext Transfer Protocol (HTTP) on top of the SSL/TLS protocol, thus adding the security capabilities of SSL/TLS to standard HTTP communications.
  2. SSL/TLS (Secure Sockets Layer/Transport Layer Security): These are cryptographic protocols designed to provide communications security over a computer network. They are used in applications such as web browsing, email, instant messaging, and voice over IP (VoIP). These protocols encrypt the segments of network connections at the Application Layer to ensure secure end-to-end transit at the Transport Layer. SSL was the original version of the protocol, and TLS is the updated version. SSL 2.0 and 3.0 are now deprecated due to security vulnerabilities. TLS 1.2 and 1.3 are the most current and secure versions as of now. These protocols are widely used on the internet. For example, when you visit a website that starts with “https://”, your browser is using SSL/TLS to securely communicate with the website
  3. OAuth: This is an open standard for access delegation, commonly used as a way for Internet users to grant websites or applications access to their information on other websites but without giving them the passwords.
  4. OpenID: This is an open standard and decentralized authentication protocol. It allows users to be authenticated by co-operating sites (known as relying parties, or RP) using a third-party service.
  5. SAML (Security Assertion Markup Language): This is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider.
  6. JWT (JSON Web Token): This is an Internet proposed standard for creating data with optional signature and/or optional encryption whose payload holds JSON that asserts some number of claims.
  7. CORS (Cross-Origin Resource Sharing): This is a mechanism that allows many resources (e.g., fonts, JavaScript, etc.) on a web page to be requested from another domain outside the domain from which the resource originated.
  8. Content Security Policy (CSP): This is a computer security standard introduced to prevent cross-site scripting (XSS), clickjacking and other code injection attacks resulting from execution of malicious content in the trusted web page context.
  9. HSTS (HTTP Strict Transport Security): This is a web security policy mechanism that helps to protect websites against man-in-the-middle attacks such as protocol downgrade attacks and cookie hijacking.

Remember, the choice of security standards and protocols depends on your specific use case and the nature of the data you’re handling.


Devesh Kr Sri

Full Stack Developer with over 15 years of experience