How to Work with Git Hooks

What are Git Hooks?

A Git hook is a script that Git executes before or after a relevant event or action is triggered, such as commit, push, and receive. Git hooks are a built-in feature; there is nothing to download.

Where can we use Git hooks?

  • Commit code only if lint and build checks pass
  • Enforce a commit message format policy
  • Prevent pushes or merges that don’t conform to certain standards or meet guideline expectations
  • Facilitate continuous deployment
  • Connect an issue tracker with a commit policy
  • Run custom validations on pushes to the master branch
  • And many more…

Types of Git Hooks

  • Local Hooks
    • pre-commit: Runs before the commit is finalized.
    • prepare-commit-msg: Provides a default commit message if one is not given.
    • commit-msg: Validates the commit message.
    • post-commit: Runs after a successful commit.
    • post-checkout: Runs after every checkout.
    • pre-rebase: Runs before git rebase.
    • post-merge: Runs after a successful merge.
  • Server-side Hooks
    • pre-receive
    • update
    • post-receive

How to use Git hooks in Node.js-based projects?

Using Git hooks in any package.json-based project is very simple, and no one on the team has to modify local hook files manually.
Two of the best ways of using centralized Git hooks from npm are husky and pre-commit:

1. husky

Install:

$ npm install husky --save-dev

Edit package.json:

{
  "scripts": {
    "precommit": "npm test",
    "prepush": "npm test",
    "...": "..."
  }
}

$ git commit -m "Keep calm and commit"

Existing hooks aren't replaced, and you can use any Git hook. If you're migrating from ghooks, simply run npm uninstall ghooks --save-dev && npm install husky --save-dev and edit package.json; Husky will automatically migrate the ghooks hooks.

2. pre-commit

Install:

$ npm install --save-dev pre-commit

Edit package.json:

{
  "scripts": {
    "test": "echo \"Error: I SHOULD FAIL LOLOLOLOLOL \" && exit 1",
    "foo": "echo \"fooo\" && exit 0",
    "bar": "echo \"bar\" && exit 0"
  },
  "pre-commit": [ "foo", "bar", "test" ]
}


How to use Git Hooks with Gulp?

var gulp = require('gulp');
var guppy = require('git-guppy')(gulp);
var gulpFilter = require('gulp-filter');
var jshint = require('gulp-jshint');
var stylish = require('jshint-stylish');
var execSync = require('child_process').execSync;

// Then simply define some gulp tasks in your gulpfile.js whose names
// match whichever git hooks you want to be triggerable by git.
gulp.task('pre-commit', function () {
  // see below
});

// less contrived example
gulp.task('pre-commit', guppy.src('pre-commit', function (filesBeingCommitted) {
  return gulp.src(filesBeingCommitted)
    .pipe(gulpFilter(['*.js']))
    .pipe(jshint())
    .pipe(jshint.reporter(stylish))
    .pipe(jshint.reporter('fail'));
}));

// another contrived example
gulp.task('pre-push', guppy.src('pre-push', function (files, extra, cb) {
  var branch = execSync('git rev-parse --abbrev-ref HEAD').toString().trim();
  if (branch === 'master') {
    cb('Don\'t push master!');
  } else {
    cb();
  }
}));

How to customize hooks manually?

If you want to write your own custom script for any hook, here is how to do it.
  • Open the .git/hooks directory under your project directory [cd ./.git/hooks && ls]
  • You will see the following sample hook files:

applypatch-msg.sample
pre-applypatch.sample
pre-commit.sample
prepare-commit-msg.sample
commit-msg.sample
post-update.sample
pre-push.sample
pre-rebase.sample
update.sample

  • The .sample extension prevents them from being run, so to enable them, remove the .sample extension from the script name.
  • Local hooks order of execution

<pre-commit> | <prepare-commit-msg> | <commit-msg> | <post-commit>

  • Server side hooks order of execution

<pre-receive> | <update> | <post-receive>
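Enabling a hook is just a matter of dropping an executable script with the right name into .git/hooks. A self-contained sketch (using a throwaway directory as a stand-in for a real repository, so nothing here touches your actual project):

```shell
# Create a stand-in for a repo's .git/hooks directory and install a
# no-op pre-commit hook; in a real repo the directory already exists.
repo=$(mktemp -d)
mkdir -p "$repo/.git/hooks"
printf '#!/bin/sh\nexit 0\n' > "$repo/.git/hooks/pre-commit"
chmod +x "$repo/.git/hooks/pre-commit"
```

Git runs the file only if it is executable and has no .sample extension, which is exactly what this sets up.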

commit-msg

#!/bin/sh
#
# Automatically adds branch name and branch description to every commit message.
#
NAME=$(git branch | grep '*' | sed 's/* //')
DESCRIPTION=$(git config branch."$NAME".description)
TEXT=$(cat "$1" | sed '/^#.*/d')
if [ -n "$TEXT" ]
then
    echo "$NAME"': '$(cat "$1" | sed '/^#.*/d') > "$1"
    if [ -n "$DESCRIPTION" ]
    then
        echo "" >> "$1"
        echo "$DESCRIPTION" >> "$1"
    fi
else
    echo "Aborting commit due to empty commit message."
    exit 1
fi

pre-commit

#!/bin/sh
#
# 1. A pre-commit hook to automatically run linters (tslint and build)
# before a commit can be made. I created this to force myself to run
# the linters before committing.
#
pass=true
RED='\033[1;31m'
GREEN='\033[0;32m'
NC='\033[0m'

echo "Running Linters:"

# Run tslint and capture the output and return code
tslint=$(npm run lint)
ret_code=$?

# If it didn't pass, announce the failure and print the output
if [ $ret_code != 0 ]; then
    printf "\n${RED}tslint failed:${NC}"
    echo "$tslint\n"
    pass=false
else
    printf "${GREEN}tslint passed.${NC}\n"
fi

echo "-----------------------------------"
echo "Running Build:"

# Run the build and capture the output and return code
build=$(npm run build)
ret_code=$?

if [ $ret_code != 0 ]; then
    printf "${RED}Build failed:${NC}"
    echo "$build\n"
    pass=false
else
    printf "${GREEN}Build passed.${NC}\n"
fi

# If there were no failures, it is good to commit
if $pass; then
    exit 0
fi

exit 1

#!/bin/sh
#
# 2. A pre-commit hook to automatically run linters (tslint and stylelint)
# before a commit can be made. I created this to force myself to run
# the linters before committing.
#
pass=true
RED='\033[1;31m'
GREEN='\033[0;32m'
NC='\033[0m'

echo "Running Linters:"

# Run tslint and capture the output and return code
tslint=$(npm run tslint)
ret_code=$?

# If it didn't pass, announce the failure and print the output
if [ $ret_code != 0 ]; then
    printf "\n${RED}tslint failed:${NC}"
    echo "$tslint\n"
    pass=false
else
    printf "${GREEN}tslint passed.${NC}\n"
fi

# Run stylelint and capture the output and return code
stylelint=$(npm run stylelint)
ret_code=$?

if [ $ret_code != 0 ]; then
    printf "${RED}stylelint failed:${NC}"
    echo "$stylelint\n"
    pass=false
else
    printf "${GREEN}stylelint passed.${NC}\n"
fi

# If there were no failures, it is good to commit
if $pass; then
    exit 0
fi

exit 1

#!/bin/bash
#
# 3. This pre-commit hook checks that you haven't left any DONOTCOMMIT
# tokens in your code when you go to commit.
#
# To use this script, copy it to .git/hooks/pre-commit and make it executable.
#
# It is provided just as an example of how to use a pre-commit hook to
# catch nasties in your code.

# Work out what to diff against; HEAD will work for any established repository.
if git rev-parse --verify HEAD >/dev/null 2>&1
then
    against=HEAD
else
    # Initial commit: diff against an empty tree object
    against=4b825dc642cb6eb9a060e54bf8d69288fbee4904
fi

diffstr=`git diff --cached $against | grep -e '^\+.*DONOTCOMMIT.*$'`
if [[ -n "$diffstr" ]] ; then
    echo "You have left DONOTCOMMIT in your changes; you can't commit until it has been removed."
    exit 1
fi

Learn more about Git Hooks

Todo

  • More automated scripts to come…
Author
[Bhavin Patel]

LoopBack.io with Node.js (How-to)


Let LoopBack Do It: A Walkthrough of the Node API Framework You’ve Been Dreaming Of

BY JOVAN JOVANOVIC

The growing popularity of Node.js for application development needs no introduction. eBay has been running a production Node API service since 2011. PayPal is actively rebuilding their front-end in Node. Walmart’s mobile site has become the biggest Node application, traffic-wise. On Thanksgiving weekend in 2014, Walmart servers processed 1.5 billion requests, 70 percent of which were delivered through mobile and powered by Node.js. On the development side, the Node package manager (npm) continues to grow rapidly, recently surpassing 150,000 hosted modules.

While Ruby has Rails and Python has Django, the dominant application development framework for Node has yet to be established. But, there is a powerful contender gaining steam: LoopBack, an open source API framework built by San Mateo, Calif., company StrongLoop. StrongLoop is an important contributor to the latest Node version, not to mention the current maintainers of Express, one of the most popular Node frameworks in existence.

IMAGE: loopback and node

Let’s take a closer look at LoopBack and its capabilities by putting everything into practice and building an example application.

What is LoopBack and How Does It Work with Node?

LoopBack is a framework for creating APIs and connecting them with backend data sources. Built on top of Express, it can take a data model definition and easily generate a fully functional end-to-end REST API that can be called by any client.

LoopBack comes with a built-in client, API Explorer. We’ll use this since it makes it easier to see the results of our work, and so that our example can focus on building the API itself.

You will of course need Node installed on your machine to follow along. Get it here. npm comes with it, so you can install the necessary packages easily. Let’s get started.

Create a Skeleton

Our application will manage people who would like to donate gifts, or things they just don’t need anymore, to somebody who might need them. So, the users will be Donors and Receivers. A Donor can create a new gift and see the list of gifts. A Receiver can see the list of gifts from all users, and can claim any that are unclaimed. Of course, we could build Donors and Receivers as separate roles on the same entity (User), but let’s try separating them so we can see how to build relations in LoopBack. The name of this groundbreaking application will be Givesomebody.

Install the StrongLoop command line tools through npm:

$ npm install -g strongloop

Then run LoopBack’s application generator:

$ slc loopback

     _-----_
    |       |    .--------------------------.
    |--(o)--|    |  Let's create a LoopBack |
   `---------´   |       application!       |
    ( _´U`_ )    '--------------------------'
    /___A___\    
     |  ~  |     
   __'.___.'__   
 ´   `  |° ´ Y ` 

? What's the name of your application? Givesomebody

Let’s add a model. Our first model will be called Gift. LoopBack will ask for the data source and base class. Since we haven’t set up the data source yet, we can put db (memory). The base class is an auto-generated model class, and we want to use PersistedModel in this case, as it already contains all the usual CRUD methods for us. Next, LoopBack asks if it should expose the model through REST (yes), and the name of the REST service. Press enter here to use the default, which is simply the plural of the model name (in our case, gifts).

$ slc loopback:model

? Enter the model name: Gift
? Select the data-source to attach Gift to: (Use arrow keys)
❯ db (memory)
? Select model's base class: (Use arrow keys)
  Model
❯ PersistedModel
? Expose Gift via the REST API? (Y/n) Yes
? Custom plural form (used to build REST URL):

Finally, we give the names of properties, their data types, and required/not-required flags. Gift will have name and description properties:

Let's add some Gift properties now.

Enter an empty property name when done.
? Property name: name
   invoke   loopback:property
? Property type: (Use arrow keys)
❯ string
? Required? (y/N)Yes

Enter an empty property name to indicate you are done defining properties.

The model generator will create two files which define the model in the application’s common/models: gift.json and gift.js. The JSON file specifies all metadata about the entity: properties, relations, validations, roles and method names. The JavaScript file is used to define additional behaviour, and to specify remote hooks to be called before or after certain operations (e.g., create, update, or delete).
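For reference, the generated common/models/gift.json for the model defined above would look roughly like this (a sketch based on the steps we just ran; the exact metadata LoopBack emits may differ slightly):

```json
{
  "name": "Gift",
  "base": "PersistedModel",
  "idInjection": true,
  "properties": {
    "name": {
      "type": "string",
      "required": true
    },
    "description": {
      "type": "string"
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}
```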

The other two model entities will be our Donor and Receiver models. We can create them using the same process, except this time let’s put User as the base class. It will give us some properties like username, password, email out of the box. We can add just name and country, for example, to have a full entity. For the Receiver we want to add the delivery address, too.
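As an illustration, the resulting common/models/donor.json might look something like this (a sketch; the User base class already contributes username, password, and email, so only the custom properties appear here):

```json
{
  "name": "Donor",
  "base": "User",
  "idInjection": true,
  "properties": {
    "name": {
      "type": "string"
    },
    "country": {
      "type": "string"
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}
```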

Project Structure

Let’s have a look at the generated project structure:

IMAGE: Project Structure

The three main directories are:

  • /server – Contains node application scripts and configuration files.
  • /client – Contains .js, .html, .css, and all other static files.
  • /common – This folder is common to both the server and the client. Model files go here.

Here’s a detailed breakdown of the contents of each directory, taken from the LoopBack documentation:

Top-level application directory

  • package.json – Standard npm package specification. See package.json. (Access in code: N/A)

/server directory – Node application files

  • server.js – Main application program file. (Access in code: N/A)
  • config.json – Application settings. See config.json. (Access in code: app.get('setting-name'))
  • datasources.json – Data source configuration file. See datasources.json. For an example, see Create new data source. (Access in code: app.datasources['datasource-name'])
  • model-config.json – Model configuration file. See model-config.json. For more information, see Connecting models to data sources. (Access in code: N/A)
  • middleware.json – Middleware definition file. For more information, see Defining middleware. (Access in code: N/A)
  • /boot directory – Add scripts to perform initialization and setup. See boot scripts. Scripts are automatically executed in alphabetical order.

/client directory – client application files

  • README.md – LoopBack generators create an empty README file in markdown format. (Access in code: N/A)
  • Other – Add your HTML, CSS, and client JavaScript files.

/common directory – shared application files

  • /models directory – Custom model files:
    • Model definition JSON files, by convention named model-name.json; for example, customer.json.
    • Custom model scripts, by convention named model-name.js; for example, customer.js.
    For more information, see Model definition JSON file and Customizing models. (Access in code: myModel = app.models.myModelName)

Build Relationships

In our example, we have a few important relationships to model. A Donor can donate many Gifts, which gives the relation Donor has many Gift. A Receiver can also receive many Gifts, so we also have the relation Receiver has many Gift. On the other side, Gift belongs to Donor, and can also belong to Receiver if the Receiver chooses to accept it. Let’s put this into the language of LoopBack.

$ slc loopback:relation

? Select the model to create the relationship from: Donor
? Relation type: has many
? Choose a model to create a relationship with: Gift
? Enter the property name for the relation: gifts
? Optionally enter a custom foreign key:
? Require a through model? No

Note that there is no through model; we are just holding the reference to the Gift.

If we repeat the above procedure for Receiver, and add two belongs to relations to Gift, we will complete our model design on the back-end side. LoopBack automatically updates the JSON files for the models to express exactly what we just did through these simple dialogs:

// common/models/donor.json
  ...
  "relations": {
    "gifts": {
      "type": "hasMany",
      "model": "Gift",
      "foreignKey": ""
    }
  },
  ...
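By way of illustration, after the two belongs to relations are added, the relations section of common/models/gift.json would look something like this (a sketch; the foreign keys are left at their defaults, and the relation names are our choice):

```json
"relations": {
  "donor": {
    "type": "belongsTo",
    "model": "Donor",
    "foreignKey": ""
  },
  "receiver": {
    "type": "belongsTo",
    "model": "Receiver",
    "foreignKey": ""
  }
}
```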

Add a Datasource

Now let’s see how to attach a real datasource to store all of our application data. For the purposes of this example, we will use MongoDB, but LoopBack has modules to connect with Oracle, MySQL, PostgreSQL, Redis and SQL Server.

First, install the connector:

$ npm install --save loopback-connector-mongodb

Then, add a datasource to your project:

$ slc loopback:datasource

? Enter the data-source name: givesomebody
? Select the connector for givesomebody: MongoDB (supported by StrongLoop)

The next step is to configure your datasource in server/datasources.json. Use this configuration for a local MongoDB server:

  ...
  "givesomebody": {
    "name": "givesomebody",
    "connector": "mongodb",
    "host": "localhost",
    "port": 27017,
    "database": "givesomebody",
    "username": "",
    "password": ""
  }
  ...

Finally, open server/model-config.json and change the datasource for all entities we want to persist in the database to "givesomebody".

{
  ...
  "User": {
    "dataSource": "givesomebody"
  },
  "AccessToken": {
    "dataSource": "givesomebody",
    "public": false
  },
  "ACL": {
    "dataSource": "givesomebody",
    "public": false
  },
  "RoleMapping": {
    "dataSource": "givesomebody",
    "public": false
  },
  "Role": {
    "dataSource": "givesomebody",
    "public": false
  },
  "Gift": {
    "dataSource": "givesomebody",
    "public": true
  },
  "Donor": {
    "dataSource": "givesomebody",
    "public": true
  },
  "Receiver": {
    "dataSource": "givesomebody",
    "public": true
  }
}

Testing Your REST API

It’s time to see what we’ve built so far! We’ll use the awesome built-in tool, API Explorer, which can be used as a client for the service we just created. Let’s try testing REST API calls.

In a separate window, start MongoDB with:

$ mongod

Run the application with:

$ node .

In your browser, go to http://localhost:3000/explorer/. You can see your entities with the list of operations available. Try adding one Donor with a POST /Donors call.

IMAGE: Testing Your API 2

IMAGE: Testing Your API 3

API Explorer is very intuitive; select any of the exposed methods, and the corresponding model schema will be displayed in the bottom right corner. In the data text area, it is possible to write a custom HTTP request. Once the request is filled in, click the “Try it out” button, and the server’s response will be displayed below.

IMAGE: Testing Your API 1

User Authentication

As mentioned above, one of the entities that comes pre-built with LoopBack is the User class. User possesses login and logout methods, and can be bound to an AccessToken entity which keeps the token of the specific user. In fact, a complete user authentication system is ready to go out of the box. If we try calling /Donors/login through API Explorer, here is the response we get:

{
  "id": "9Kvp4zc0rTrH7IMMeRGwTNc6IqNxpVfv7D17DEcHHsgcAf9Z36A3CnPpZJ1iGrMS",
  "ttl": 1209600,
  "created": "2015-05-26T01:24:41.561Z",
  "userId": ""
}

The id is actually the value of the AccessToken, generated and persisted in the database automatically. As you see here, it is possible to set an access token and use it for each subsequent request.
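LoopBack accepts the token either in an Authorization header or as an access_token query parameter. A minimal sketch of building such a request in Node (the host, port, and path are hypothetical; the token is the id from the login response):

```javascript
// Build Node http request options that carry a LoopBack access token.
// Both transport styles are shown for illustration; either one alone works.
function authedRequestOptions(token, path) {
  return {
    host: 'localhost',
    port: 3000,
    path: path + '?access_token=' + encodeURIComponent(token),
    headers: { Authorization: token }
  };
}

var token = '9Kvp4zc0rTrH7IMMeRGwTNc6IqNxpVfv7D17DEcHHsgcAf9Z36A3CnPpZJ1iGrMS';
var opts = authedRequestOptions(token, '/api/Gifts');
console.log(opts.path);
```

These options can then be passed straight to http.request().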

IMAGE: User Authentication


Remote Methods

A remote method is a static method of a model, exposed over a custom REST endpoint. Remote methods can be used to perform operations not provided by LoopBack’s standard model REST API.

Besides the CRUD methods that we get out of the box, we can add as many custom methods as we want. All of them should go into the [model].js file. In our case, let’s add a remote method to the Gift model to check whether the gift is already reserved, and one to list all gifts that are not reserved.

First, let’s add an additional property to the model called reserved. Just add this to the properties in gift.json:

    ...
    "reserved": {
      "type": "boolean"
    }
    ...

The remote method in gift.js should look something like this:

module.exports = function(Gift) {

    // method which lists all free gifts
    Gift.listFree = function(cb) {
        Gift.find({
            where: {
                // matches gifts whose reserved flag is false or unset
                reserved: { neq: true }
            }
        }, cb);
    };

    // expose the above method through the REST
    Gift.remoteMethod('listFree', {
        returns: {
            arg: 'gifts',
            type: 'array'
        },
        http: {
            path: '/list-free',
            verb: 'get'
        }
    });

    // method which tells whether the gift is free
    Gift.isFree = function(id, cb) {
        Gift.findById(id, function(err, gift) {
            if (err) return cb(err);

            var response;
            if (!gift || gift.reserved)
                response = 'Sorry, the gift is reserved';
            else
                response = 'Great, this gift can be yours';

            cb(null, response);
        });
    };

    // expose the method through REST
    Gift.remoteMethod('isFree', {
        accepts: {
            arg: 'id',
            type: 'number'
        },
        returns: {
            arg: 'response',
            type: 'string'
        },
        http: {
            path: '/free',
            verb: 'post'
        }
    });
};

So to find out if a particular gift is available, the client can now send a POST request to /api/Gifts/free, passing in the id of the gift in question.

Remote Hooks

Sometimes you need to execute some code before or after a remote method runs. You can define two kinds of remote hooks:

  • beforeRemote() runs before the remote method.
  • afterRemote() runs after the remote method.

In both cases, you provide two arguments: a string that matches the remote method to which you want to “hook” your function, and the callback function. Much of the power of remote hooks is that the string can include wildcards, so it is triggered by any matching method.

In our case, let’s set a hook to print information to the console whenever a new Donor is created. To accomplish this, let’s add a “before create” hook in donor.js:

module.exports = function(Donor) {
    Donor.beforeRemote('create', function(context, donor, next) {
        console.log('Saving new donor with name: ', context.req.body.name);
    
        next();
    });
};

The hook receives the request context, and calling next() passes control along once the hook has run, just as in Express middleware (discussed below).

Access Controls

LoopBack applications access data through models, so controlling access to data means defining restrictions on models; that is, specifying who or what can read and write the data or execute methods on the models. LoopBack access controls are determined by access control lists, or ACLs.

Let’s allow anonymous Donors and Receivers to view gifts, but only logged-in Donors to create and delete them.

$ slc loopback:acl

To begin, let’s deny everyone access to all endpoints.

? Select the model to apply the ACL entry to: Gift
? Select the ACL scope: All methods and properties
? Select the access type: All (match all types)
? Select the role: All users
? Select the permission to apply: Explicitly deny access

Next, allow everyone to read from Gift models:

$ slc loopback:acl

? Select the model to apply the ACL entry to: Gift
? Select the ACL scope: All methods and properties
? Select the access type: Read
? Select the role: All users
? Select the permission to apply: Explicitly grant access

Then, we want to allow authenticated users to create Gifts:

$ slc loopback:acl

? Select the model to apply the ACL entry to: Gift
? Select the ACL scope: A single method
? Enter the method name: create
? Select the role: Any authenticated user
? Select the permission to apply: Explicitly grant access

And finally, let’s allow the owner of the gift to make any changes:

$ slc loopback:acl

? Select the model to apply the ACL entry to: Gift
? Select the ACL scope: All methods and properties
? Select the access type: Write
? Select the role: The user owning the object
? Select the permission to apply: Explicitly grant access

Now when we review gift.json, everything should be in place:

"acls": [
  {
    "accessType": "*",
    "principalType": "ROLE",
    "principalId": "$everyone",
    "permission": "DENY"
  },
  {
    "accessType": "READ",
    "principalType": "ROLE",
    "principalId": "$everyone",
    "permission": "ALLOW"
  },
  {
    "accessType": "EXECUTE",
    "principalType": "ROLE",
    "principalId": "$authenticated",
    "permission": "ALLOW",
    "property": "create"
  }
],

One important note here: $authenticated is a predefined role which corresponds to all users in the system (both Donors and Receivers), but we only want to allow Donors to create new Gifts. Therefore, we need a custom role. As Role is one more entity we get out of the box, we can leverage its API to create the $authenticatedDonor role in a boot script, and then just modify principalId in gift.json.

It will be necessary to create a new file, server/boot/script.js, and add the following code:

module.exports = function(app) {
    var Role = app.models.Role;

    Role.create({
        name: 'authenticatedDonor'
    }, function(err, role) {
        if (err) return console.error(err);
    });
};

The RoleMapping entity maps Roles to Users. Be sure that Role and RoleMapping are both exposed through REST. In server/model-config.json, check that "public" is set to true for the Role entity. Then in donor.js, we can write a “before create” hook that will map the userID and roleID in the RoleMapping POST API call.
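That mapping hook could be sketched roughly as follows. RoleMapping is stubbed here so the sketch is self-contained; in a real donor.js you would look it up via Donor.app.models.RoleMapping, and the role id would come from the authenticatedDonor role created at boot:

```javascript
// Stub of LoopBack's RoleMapping model, for illustration only.
var RoleMapping = {
  USER: 'USER',
  create: function (data, cb) { cb(null, data); }
};

// Map a newly created donor to a role by creating a RoleMapping entry.
function mapDonorToRole(donorId, roleId, next) {
  RoleMapping.create({
    principalType: RoleMapping.USER,
    principalId: donorId,
    roleId: roleId
  }, function (err, mapping) {
    if (err) return next(err);
    next(null, mapping);
  });
}

var result;
mapDonorToRole(1, 42, function (err, mapping) { result = mapping; });
```

In the real hook, donorId would be taken from the instance passed to the afterRemote('create', ...) callback.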

Middleware

Middleware contains functions that are executed when a request is made to the REST endpoint. As LoopBack is based on Express, it uses Express middleware with one additional concept, called “middleware phases.” Phases are used to clearly define the order in which functions in middleware are called.

Here is the list of predefined phases, as provided in the LoopBack docs:

  1. initial – The first point at which middleware can run.
  2. session – Prepare the session object.
  3. auth – Handle authentication and authorization.
  4. parse – Parse the request body.
  5. routes – HTTP routes implementing your application logic. Middleware registered via the Express API app.use, app.route, app.get (and other HTTP verbs) runs at the beginning of this phase. Use this phase also for sub-apps like loopback/server/middleware/rest or loopback-explorer.
  6. files – Serve static assets (requests are hitting the file system here).
  7. final – Deal with errors and requests for unknown URLs.

Each phase has three subphases. For example, the subphases of the initial phase are:

  1. initial:before
  2. initial
  3. initial:after

Let’s take a quick look at our default middleware.json:

{
  "initial:before": {
    "loopback#favicon": {}
  },
  "initial": {
    "compression": {},
    "cors": {
      "params": {
        "origin": true,
        "credentials": true,
        "maxAge": 86400
      }
    }
  },
  "session": {
  },
  "auth": {
  },
  "parse": {
  },
  "routes": {
  },
  "files": {
  },
  "final": {
    "loopback#urlNotFound": {}
  },
  "final:after": {
    "errorhandler": {}
  }
}

In the initial phase, we call loopback.favicon() (loopback#favicon is the middleware id for that call). Then, the third-party npm modules compression and cors are called (with or without parameters). In the final phase, we have two more calls: urlNotFound is a LoopBack call, and errorhandler is a third-party module. This example shows that many built-in calls can be used just like external npm modules. And of course, we can always create our own middleware and register it through this JSON file.
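A custom middleware function uses the standard Express signature; it could then be registered in middleware.json under whichever phase suits it. A minimal sketch (the requestLogger name is hypothetical):

```javascript
// A minimal Express-style middleware function: log the request,
// then hand control to the next middleware in the chain.
function requestLogger(req, res, next) {
  console.log(req.method, req.url);
  next();
}

// Exercise it with stubbed request/response objects.
var called = false;
requestLogger({ method: 'GET', url: '/api/Gifts' }, {}, function () { called = true; });
```

Forgetting to call next() would leave the request hanging, which is the most common middleware mistake.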

loopback-boot

To wrap up, let’s mention a module which exports the boot() function that initializes the application. In server/server.js you’ll find the following piece of code, which bootstraps the application:

boot(app, __dirname, function(err) {
    if (err) throw err;
  
    // start the server if `$ node server.js`
    if (require.main === module)
        app.start();
});

This script will search the server/boot folder, and load all the scripts it finds there in alphabetical order. Thus, in server/boot, we can specify any script which should be run at start. One example is explorer.js, which runs API Explorer, the client we used for testing our API.

Got the repetition blues? Don’t build that Node API from scratch again. Let LoopBack do it!

Conclusion

Before I leave you, I would like to mention StrongLoop Arc, a graphical UI that can be used as an alternative to slc command line tools. It also includes tools for building, profiling and monitoring Node applications. For those who are not fans of the command line, this is definitely worth trying.

IMAGE: Conclusion

Generally speaking, LoopBack can save you a lot of manual work since you are getting a lot of stuff out of the box. It allows you to focus on application-specific problems and business logic. If your application is based on CRUD operations and manipulating predefined entities, if you are sick of rewriting the user’s authentication and authorization infrastructure when tons of developers have written that before you, or if you want to leverage all the advantages of a great web framework like Express, then building your REST API with LoopBack can make your dreams come true. It’s a piece of cake!

Source:https://www.toptal.com/nodejs/let-loopback-do-it-a-walkthrough-of-the-node-api-framework-you-ve-been-dreaming-of

Happy Coding!

What are the most famous web apps built on top of Node.js?


Here are some Node.js apps that are famous for their scale and ridiculous performance.

Walmart switched over to Node.js on a Black Friday, got more than 200 million visitors that day, and never went above 1% CPU.

LinkedIn rewrote their mobile backend in Node.js, and proceeded to get 20 times the performance out of 1/10 the servers.

Groupon increased page load speed by 50% by switching from Ruby on Rails to Node.js. They also reported being able to launch new features much faster than before.

Paypal did an experiment where two teams built identical apps – one in Java and one in Node.js. The Node.js team built theirs in half the time. The Node.js app had response times that were 50% faster than the Java app.
You can read more about these incredible performance gains (and developer productivity gains)

IBM X-Force Exchange:
The backend (API) runs on Node.js in a CloudFoundry environment. This makes it easy to scale the whole thing horizontally and vertically on demand. The backend handles over 700 TB of Threat Intelligence data for thousands of customers: in a single(!) thread. [IBM X-Force Exchange]

Amazon uses Node.js for certain services in their backend. Their newest website is also based on Angular.js, so it is likely that their frontend is served by a simple Node.js webserver instance. [At least you can use Node in AWS: Node.js]

Netflix moved (or is moving) from Java to JavaScript in their backend. [Building With Node.js At Netflix]

Many companies and projects are switching to Node.js, like:

  1. Klout
  2. Koding
  3. Microsoft
  4. PayPal
  5. Yahoo
  6. simplereach.com
  7. Quad
  8. NodePing
  9. LinkedIn
  10. Flickr
  11. duckduckgo.com

Happy Coding…

Using Edge.js to Combine Node.js with C#


Getting Familiar with Edge.js

To bring .NET and Node.js together, Edge.js has some prerequisites. It runs on .NET 4.5, so you must have .NET 4.5 installed. As Node.js treats all I/O and network calls as slower operations, Edge.js assumes that the .NET routine to be called is a slower operation and handles it asynchronously. The .NET function to be called has to be an asynchronous function as well.

The function is assigned to a delegate of type Func<object, Task<object>>. This means the function is asynchronous, can take any type of argument, and can return any type of value. Edge.js takes care of converting the data from .NET types to JSON and vice versa. Because of this marshalling and unmarshalling process, the .NET objects must not contain circular references; their presence may lead to infinite loops while converting the data from one form to the other.
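The circular-reference restriction is easy to demonstrate with Node's own JSON machinery, which illustrates the same constraint the marshalling runs into; a self-referencing object cannot be serialized:

```javascript
// An object that refers to itself cannot be converted to JSON,
// which is why such objects cannot be marshalled across the boundary.
var person = { name: 'Alex' };
person.self = person; // circular reference

var failed = false;
try {
  JSON.stringify(person);
} catch (e) {
  failed = true; // TypeError: Converting circular structure to JSON
}
```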

Hello World using Edge

Edge.js can be added to a Node.js application through NPM. Following is the command to install the package and save it to package.json file:

> npm install edge --save

The edge object can be obtained in a Node.js file as:

var edge = require('edge');

The edge object can accept inline C# code, read code from a .cs or .csx file, and also execute the code from a compiled dll. We will see all of these approaches.

To start with, let’s write a “Hello world” routine inline in C# and call it using Edge. The following snippet defines the proxy with inline C# code:

var helloWorld = edge.func(function () {/*
    async (input) => {
        return "Hurray! Inline C# works with edge.js!!!";
    }
*/});

The asynchronous and anonymous C# function passed in the above snippet is compiled dynamically before being called. The inline code has to be passed as a multiline comment. The method edge.func returns a proxy function that internally calls the C# method, so at this point the C# method has not yet been called. The following snippet calls the proxy:

helloWorld(null, function (error, result) {
    if (error) {
        console.log("Error occurred.");
        console.log(error);
        return;
    }
    console.log(result);
});

In the above snippet, we pass null as the first parameter of the proxy, as we are not using the input value. The callback is like any other Node.js callback, accepting error and result as parameters.
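Since the proxy follows the standard Node.js error-first callback convention, the calling pattern can be exercised without Edge.js at all. The sketch below uses a stand-in proxy (fakeProxy is illustrative, not part of Edge.js) to show the same shape:

```javascript
// fakeProxy is a stand-in (not part of Edge.js) for the proxy returned by
// edge.func: it takes an input value and an error-first callback, and
// invokes the callback asynchronously, just as the real proxy does.
function fakeProxy(input, callback) {
  setImmediate(function () {
    callback(null, 'Processed input: ' + input);
  });
}

// Called exactly like the helloWorld proxy above:
fakeProxy(null, function (error, result) {
  if (error) {
    console.log('Error occurred.');
    console.log(error);
    return;
  }
  console.log(result);
});
```

Because the convention is standard, proxies produced by edge.func compose with the rest of the Node.js ecosystem like any other callback-based API.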

We can rewrite the same Edge.js proxy creation by passing the C# code in the form of a string instead of a multiline comment. Following snippet shows this:

var helloWorld = edge.func(
    'async (input) => {' +
    '    return "Hurray! Inline C# works with edge.js!!!";' +
    '}'
);

We can also pass a class in the snippet and call a method of that class. By convention, the class should be named Startup and the method Invoke. The Invoke method is attached to a delegate of type Func&lt;object, Task&lt;object&gt;&gt;. The following snippet shows the usage of a class:

var helloFromClass = edge.func(function () {
    /*
    using System.Threading.Tasks;

    public class Startup
    {
        public async Task<object> Invoke(object input)
        {
            return "Hurray! Inline C# class works with edge.js!!!";
        }
    }
    */
});

It can be invoked the same way we did previously:

helloFromClass(10, function (error, result) {
    if (error) {
        console.log("Error occurred...");
        console.log(error);
        return;
    }
    console.log(result);
});

A separate C# file

Though it is possible to write the C# code inline, as developers we generally prefer to keep code in a separate file for better organization. By convention, this file should contain a class called Startup with a method named Invoke, and the Invoke method is bound to the delegate of type Func&lt;object, Task&lt;object&gt;&gt;.

Following snippet shows content in a separate file, Startup.cs:

using System.Threading.Tasks;

public class Startup
{
    public async Task<object> Invoke(object input)
    {
        return new Person() {
            Name = "Alex",
            Occupation = "Software Professional",
            Salary = 10000,
            City = "Tokyo"
        };
    }
}

public class Person
{
    public string Name { get; set; }
    public string Occupation { get; set; }
    public double Salary { get; set; }
    public string City { get; set; }
}

Performing CRUD Operations on SQL Server

Now that you have a basic idea of how Edge.js works, let's build a simple application that performs CRUD operations on a SQL Server database using Entity Framework and call this functionality from Node.js. As we will have a considerable amount of code to set up Entity Framework and perform CRUD operations in C#, let's create a class library and consume it using Edge.js.

Creating Database and Class Library

As a first step, create a new database named EmployeesDB and run the following commands to create the employees table and insert data into it:

CREATE TABLE Employees (
    Id INT IDENTITY PRIMARY KEY,
    Name VARCHAR(50),
    Occupation VARCHAR(20),
    Salary INT,
    City VARCHAR(50)
);

INSERT INTO Employees VALUES
    ('Ravi', 'Software Engineer', 10000, 'Hyderabad'),
    ('Rakesh', 'Accountant', 8000, 'Bangalore'),
    ('Rashmi', 'Govt Official', 7000, 'Delhi');

Open Visual Studio, create a new class library project named EmployeesCRUD, and add a new Entity Data Model to the project pointing to the database created above. To make the process of consuming the DLL in Edge.js easier, let's assign the connection string inline in the constructor of the context class. Following is the constructor of the context class in my class library:

public EmployeesModel()
    : base("data source=.;initial catalog=EmployeesDB;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework;")
{
}

Add a new class to the project and name it EmployeesOperations.cs. This file will contain the methods to interact with Entity Framework and perform CRUD operations on the table Employees. As a best practice, let’s implement the interface IDisposable in this class and dispose the context object in the Dispose method. Following is the basic setup in this class:

public class EmployeesOperations : IDisposable
{
    EmployeesModel context;

    public EmployeesOperations()
    {
        context = new EmployeesModel();
    }

    public void Dispose()
    {
        context.Dispose();
    }
}

As we will be calling methods of this class directly using Edge.js, the methods have to follow the signature of the delegate discussed earlier. Following is the method that gets all employees:

public async Task<object> GetEmployees(object input)
{
    return await context.Employees.ToListAsync();
}

There is a challenge with the methods performing add and edit operations, as we need to convert the input data from object to the Employee type. This conversion is not straightforward, because the object passed into the .NET function is a dynamic expando object. We need to cast it to a dictionary and then read the values using property names as keys. The following method performs this conversion before inserting data into the database:

public async Task<object> AddEmployee(object emp)
{
    var empAsDictionary = (IDictionary<string, object>)emp;
    var employeeToAdd = new Employee() {
        Name = (string)empAsDictionary["Name"],
        City = (string)empAsDictionary["City"],
        Occupation = (string)empAsDictionary["Occupation"],
        Salary = (int)empAsDictionary["Salary"]
    };

    var addedEmployee = context.Employees.Add(employeeToAdd);
    await context.SaveChangesAsync();
    return addedEmployee;
}

The same rule applies to the edit method as well. It is shown below:

public async Task<object> EditEmployee(object input)
{
    var empAsDictionary = (IDictionary<string, object>)input;
    var id = (int)empAsDictionary["Id"];
    var employeeEntry = context.Employees.SingleOrDefault(e => e.Id == id);

    employeeEntry.Name = (string)empAsDictionary["Name"];
    employeeEntry.Occupation = (string)empAsDictionary["Occupation"];
    employeeEntry.Salary = (int)empAsDictionary["Salary"];
    employeeEntry.City = (string)empAsDictionary["City"];

    context.Entry(employeeEntry).State = System.Data.Entity.EntityState.Modified;
    return await context.SaveChangesAsync();
}

We will compose REST APIs using Express.js and call the above functions from them. Before that, we need to make the compiled DLL of the class library available to the Node.js application. We can do that by building the class library project and copying the resulting DLLs into a folder inside the Node.js application.

Creating Node.js Application

Create a new folder in your system and name it ‘NodeEdgeSample’. Create a new folder ‘dlls’ inside it and copy the binaries of the class library project into this folder. You can open this folder using your favorite tool for Node.js. I generally use WebStorm and have started using Visual Studio Code these days.

Add a package.json file to this project using the "npm init" command (discussed in the Understanding NPM article) and add the following dependencies to it:

"dependencies": {
    "body-parser": "^1.13.2",
    "edge": "^0.10.1",
    "express": "^4.13.1"
}

Run npm install to get these packages installed in the project. Add a new file to the project and name it 'server.js'. This file will contain all of the Node.js code required for the application. First things first, let's get references to all the packages and add the required middleware to the Express.js pipeline. The following snippet does this:

var edge = require('edge');
var express = require('express');
var bodyParser = require('body-parser');

var app = express();

app.use('/', express.static(require('path').join(__dirname, 'scripts')));
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());

Now, let's start adding the required Express REST APIs to the application. As already mentioned, the REST endpoints will interact with the compiled DLL to achieve their functionality. The DLL file can be referenced using the edge.func function. If type and method are not specified, it defaults the class name to Startup and the method name to Invoke. Otherwise, we can override the class and method names using the properties in the object passed into edge.func.

Following is the REST API that returns list of employees:

app.get('/api/employees', function (request, response) {
    var getEmployeesProxy = edge.func({
        assemblyFile: 'dlls\\EmployeesCRUD.dll',
        typeName: 'EmployeesCRUD.EmployeesOperations',
        methodName: 'GetEmployees'
    });

    getEmployeesProxy(null, apiResponseHandler(request, response));
});

The function apiResponseHandler is a curried generic handler shared by all three REST APIs. It returns another function that is called automatically once execution of the .NET function completes. Following is its definition:

function apiResponseHandler(request, response) {
    return function (error, result) {
        if (error) {
            response.status(500).send({ error: error });
            return;
        }
        response.send(result);
    };
}
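Because apiResponseHandler is a plain closure over the response object, the currying can be exercised in isolation with a stand-in response. In the sketch below, the handler is repeated so the snippet is self-contained, and makeFakeResponse mimics just enough of Express's res.status/res.send API (it is illustrative, not part of Express):

```javascript
// The same curried handler as in the article: it closes over a particular
// response object and returns an (error, result) callback bound to it.
function apiResponseHandler(request, response) {
  return function (error, result) {
    if (error) {
      response.status(500).send({ error: error });
      return;
    }
    response.send(result);
  };
}

// Stand-in for an Express response: records status code and body.
function makeFakeResponse() {
  var res = { statusCode: 200, body: null };
  res.status = function (code) { res.statusCode = code; return res; };
  res.send = function (payload) { res.body = payload; return res; };
  return res;
}

// Success path: the callback forwards the result with the default status.
var ok = makeFakeResponse();
apiResponseHandler(null, ok)(null, [{ Name: 'Ravi' }]);
console.log(ok.statusCode); // 200

// Error path: the callback reports a 500 with the error payload.
var failed = makeFakeResponse();
apiResponseHandler(null, failed)(new Error('db down'), null);
console.log(failed.statusCode); // 500
```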

Implementation of the REST APIs for add and edit is similar to the one above. The only difference is that they pass an input object to the proxy function.

app.post('/api/employees', function (request, response) {
    var addEmployeeProxy = edge.func({
        assemblyFile: "dlls\\EmployeesCRUD.dll",
        typeName: "EmployeesCRUD.EmployeesOperations",
        methodName: "AddEmployee"
    });

    addEmployeeProxy(request.body, apiResponseHandler(request, response));
});

app.put('/api/employees/:id', function (request, response) {
    var editEmployeeProxy = edge.func({
        assemblyFile: "dlls\\EmployeesCRUD.dll",
        typeName: "EmployeesCRUD.EmployeesOperations",
        methodName: "EditEmployee"
    });

    editEmployeeProxy(request.body, apiResponseHandler(request, response));
});

Consuming APIs on a Page

The final part of this tutorial is consuming these APIs on an HTML page. Add a new HTML page to the application, and reference Bootstrap CSS and Angular.js in it. This page will list all the employees and provide interfaces to add a new employee and edit the details of an existing employee. Following is the markup of the page:

<!doctype html>
<html>
<head>
    <title>Edge.js sample</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css"/>
</head>
<body ng-app="edgeCrudApp">
    <div class="container" ng-controller="EdgeCrudController as vm">
        <div class="text-center">
            <h1>Node-Edge-.NET CRUD Application</h1>
            <hr/>
            <div class="col-md-12">
                <form name="vm.addEditEmployee">
                    <div class="control-group">
                        <input type="text" ng-model="vm.employee.Name" placeholder="Name" />
                        <input type="text" ng-model="vm.employee.Occupation" placeholder="Occupation" />
                        <input type="text" ng-model="vm.employee.Salary" placeholder="Salary" />
                        <input type="text" ng-model="vm.employee.City" placeholder="City" />
                        <input type="button" class="btn btn-primary" ng-click="vm.addOrEdit()" value="Add or Edit" />
                        <input type="button" class="btn" value="Reset" ng-click="vm.reset()" />
                    </div>
                </form>
            </div>
            <br/>
            <div class="col-md-10">
                <table class="table">
                    <thead>
                        <tr>
                            <th style="text-align: center">Name</th>
                            <th style="text-align: center">Occupation</th>
                            <th style="text-align: center">Salary</th>
                            <th style="text-align: center">City</th>
                            <th style="text-align: center">Edit</th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr ng-repeat="emp in vm.employees">
                            <td>{{emp.Name}}</td>
                            <td>{{emp.Occupation}}</td>
                            <td>{{emp.Salary}}</td>
                            <td>{{emp.City}}</td>
                            <td>
                                <button class="btn" ng-click="vm.edit(emp)">Edit</button>
                            </td>
                        </tr>
                    </tbody>
                </table>
            </div>
        </div>
    </div>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.4.3/angular.min.js"></script>
    <script src="app.js"></script>
</body>
</html>

Add a new folder to the application and name it 'scripts'. Add a new JavaScript file to this folder and name it 'app.js'. This file will contain the client-side script of the application. Since we are building an Angular.js application, the file will define an Angular module with a controller and a service. The functionality of the file includes:

  • Getting the list of employees on page load
  • Adding an employee or editing an existing employee using the same form
  • Resetting the form to its pristine state once the employee is added or edited

Here’s the code for this file:

(function () {
    var app = angular.module('edgeCrudApp', []);

    app.controller('EdgeCrudController', function (edgeCrudSvc) {
        var vm = this;

        function getAllEmployees() {
            edgeCrudSvc.getEmployees().then(function (result) {
                vm.employees = result;
            }, function (error) {
                console.log(error);
            });
        }

        vm.addOrEdit = function () {
            vm.employee.Salary = parseInt(vm.employee.Salary);
            if (vm.employee.Id) {
                edgeCrudSvc.editEmployee(vm.employee)
                    .then(function (result) {
                        resetForm();
                        getAllEmployees();
                    }, function (error) {
                        console.log("Error while updating an employee");
                        console.log(error);
                    });
            } else {
                edgeCrudSvc.addEmployee(vm.employee)
                    .then(function (result) {
                        resetForm();
                        getAllEmployees();
                    }, function (error) {
                        console.log("Error while inserting new employee");
                        console.log(error);
                    });
            }
        };

        vm.reset = function () {
            resetForm();
        };

        function resetForm() {
            vm.employee = {};
            vm.addEditEmployee.$setPristine();
        }

        vm.edit = function (emp) {
            vm.employee = emp;
        };

        getAllEmployees();
    });

    app.factory('edgeCrudSvc', function ($http) {
        var baseUrl = '/api/employees';

        function getEmployees() {
            return $http.get(baseUrl)
                .then(function (result) {
                    return result.data;
                }, function (error) {
                    return error;
                });
        }

        function addEmployee(newEmployee) {
            return $http.post(baseUrl, newEmployee)
                .then(function (result) {
                    return result.data;
                }, function (error) {
                    return error;
                });
        }

        function editEmployee(employee) {
            return $http.put(baseUrl + '/' + employee.Id, employee)
                .then(function (result) {
                    return result.data;
                }, function (error) {
                    return error;
                });
        }

        return {
            getEmployees: getEmployees,
            addEmployee: addEmployee,
            editEmployee: editEmployee
        };
    });
}());

Save all the files and run the application. You should be able to add and edit employees. I am leaving the task of deleting an employee as an assignment to the reader.

Conclusion

In general, it is challenging to make two different frameworks talk to each other. Edge.js takes away the pain of that integration and provides an easier, cleaner way to combine the strengths of .NET and Node.js to build great applications. It aligns with the Node.js event loop model and respects the execution model of the platform as well. Let's thank Tomasz Janczuk for his great work and use this tool effectively!

Download the entire source code of this article (Github)

Happy Coding!