How to – Work with Git Hooks?


 

What are Git Hooks?

A git hook is a script that Git executes before or after a relevant Git event or action is triggered.

Git hooks are scripts that Git executes before or after events such as commit, push, and receive. Git hooks are a built-in feature, so there is nothing to download or install.

Where can we use Git Hooks?

  • Commit code only if lint and build pass
  • Enforce a commit message format policy
  • Prevent pushes or merges that don’t conform to certain standards or meet guideline expectations
  • Facilitate continuous deployment
  • Connect an issue tracker with a commit policy
  • Run custom validations on pushes to the master branch
  • And many more…

Types of Git Hooks

  • Local Hooks
    • pre-commit: Runs before the commit is finalized.
    • prepare-commit-msg: Provides a default commit message if one is not given.
    • commit-msg: Validates the commit message.
    • post-commit: Runs after a successful commit.
    • post-checkout: Runs after every checkout.
    • pre-rebase: Runs before git rebase.
    • post-merge: Runs after a successful merge.
  • Server-side Hooks
    • pre-receive
    • update
    • post-receive
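
To get a feel for the server-side hooks, here is a minimal pre-receive sketch of my own (an illustration, not from any particular project) that rejects direct pushes to master. Server-side hooks receive one "<old-sha> <new-sha> <ref>" line per updated ref on stdin:

#!/bin/sh
# Reject any direct push to the master branch.
while read oldrev newrev refname; do
    if [ "$refname" = "refs/heads/master" ]; then
        echo "Direct pushes to master are not allowed."
        exit 1
    fi
done
exit 0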

How to use them in Node.js-based projects?

Using Git hooks with any package.json-based project is very simple, and no one on the team has to modify the local hook files manually.
Two of the best options for centralized Git hooks from npm are husky and pre-commit:

husky

Install:

npm install husky --save-dev

Edit package.json:

{
  "scripts": {
    "precommit": "npm test",
    "prepush": "npm test",
    "...": "..."
  }
}

git commit -m "Keep calm and commit"

Existing hooks aren't replaced and you can use any Git hook. If you're migrating from ghooks, simply run npm uninstall ghooks --save-dev && npm install husky --save-dev and edit package.json. Husky will automatically migrate ghooks hooks.

pre-commit

Install:

npm install --save-dev pre-commit

{
  "scripts": {
    "test": "echo \"Error: I SHOULD FAIL LOLOLOLOLOL \" && exit 1",
    "foo": "echo \"fooo\" && exit 0",
    "bar": "echo \"bar\" && exit 0"
  },
  "pre-commit": [ "foo", "bar", "test" ]
}


How to use Git Hooks with Gulp?

var gulp = require('gulp');
var guppy = require('git-guppy')(gulp);
// additional modules used by the examples below
var gulpFilter = require('gulp-filter');
var jshint = require('gulp-jshint');
var stylish = require('jshint-stylish');
var execSync = require('child_process').execSync;

// Then simply define some gulp tasks in your gulpfile.js
// whose names match whichever git-hooks you want to be triggerable by git.
gulp.task('pre-commit', function () {
    // see below
});

// less contrived example
gulp.task('pre-commit', guppy.src('pre-commit', function (filesBeingCommitted) {
    return gulp.src(filesBeingCommitted)
        .pipe(gulpFilter(['*.js']))
        .pipe(jshint())
        .pipe(jshint.reporter(stylish))
        .pipe(jshint.reporter('fail'));
}));

// another contrived example
gulp.task('pre-push', guppy.src('pre-push', function (files, extra, cb) {
    // execSync returns a Buffer, so convert and trim before comparing
    var branch = execSync('git rev-parse --abbrev-ref HEAD').toString().trim();
    if (branch === 'master') {
        cb('Don\'t push master!');
    } else {
        cb();
    }
}));

How to customize hooks manually?

If you want to write a custom script for any hook, here is how you can do it.
  • Open the .git/hooks directory under your project directory [cd ./.git/hooks && ls]
  • You will see the following sample hook files:

applypatch-msg.sample
commit-msg.sample
post-update.sample
pre-applypatch.sample
pre-commit.sample
pre-push.sample
pre-rebase.sample
prepare-commit-msg.sample
update.sample

  • The .sample extension prevents them from being run, so to enable a hook, remove the .sample extension from the script name and make sure the file is executable, for example:
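
mv pre-commit.sample pre-commit && chmod +x pre-commit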
  • Local hooks order of execution

<pre-commit> | <prepare-commit-msg> | <commit-msg> | <post-commit>

  • Server side hooks order of execution

<pre-receive> | <update> | <post-receive>

commit-msg

#!/bin/sh
#
# Automatically adds branch name and branch description to every commit message.
#
NAME=$(git branch | grep '*' | sed 's/* //')
DESCRIPTION=$(git config branch."$NAME".description)
TEXT=$(cat "$1" | sed '/^#.*/d')

if [ -n "$TEXT" ]
then
    echo "$NAME"': '$(cat "$1" | sed '/^#.*/d') > "$1"
    if [ -n "$DESCRIPTION" ]
    then
        echo "" >> "$1"
        echo $DESCRIPTION >> "$1"
    fi
else
    echo "Aborting commit due to empty commit message."
    exit 1
fi

pre-commit

#!/bin/sh
# 1.
# This is a pre-commit hook to automatically run linters (tslint and build)
# before a commit can be made. I created this to force myself to run the linters
# before committing.

pass=true
RED='\033[1;31m'
GREEN='\033[0;32m'
NC='\033[0m'

echo "Running Linters:"

# Run tslint and get the output and return code
tslint=$(npm run lint)
ret_code=$?

# If it didn't pass, announce it failed and print the output
if [ $ret_code != 0 ]; then
    printf "\n${RED}tslint failed:${NC}"
    echo "$tslint\n"
    pass=false
else
    printf "${GREEN}tslint passed.${NC}\n"
fi

echo "-----------------------------------"
echo "Running Build:"

# Run build and get the output and return code
build=$(npm run build)
ret_code=$?

if [ $ret_code != 0 ]; then
    printf "${RED}Build failed:${NC}"
    echo "$build\n"
    pass=false
else
    printf "${GREEN}Build passed.${NC}\n"
fi

# If there were no failures, it is good to commit
if $pass; then
    exit 0
fi

exit 1

#!/bin/sh
# 2.
# This is a pre-commit hook to automatically run linters (tslint and stylelint)
# before a commit can be made. I created this to force myself to run the linters
# before committing.

pass=true
RED='\033[1;31m'
GREEN='\033[0;32m'
NC='\033[0m'

echo "Running Linters:"

# Run tslint and get the output and return code
tslint=$(npm run tslint)
ret_code=$?

# If it didn't pass, announce it failed and print the output
if [ $ret_code != 0 ]; then
    printf "\n${RED}tslint failed:${NC}"
    echo "$tslint\n"
    pass=false
else
    printf "${GREEN}tslint passed.${NC}\n"
fi

# Run stylelint and get the output and return code
stylelint=$(npm run stylelint)
ret_code=$?

if [ $ret_code != 0 ]; then
    printf "${RED}stylelint failed:${NC}"
    echo "$stylelint\n"
    pass=false
else
    printf "${GREEN}stylelint passed.${NC}\n"
fi

# If there were no failures, it is good to commit
if $pass; then
    exit 0
fi

exit 1

#!/bin/bash
# 3.
# This pre-commit hook checks that you haven't left any DONOTCOMMIT tokens in
# your code when you go to commit.
#
# To use this script, copy it to .git/hooks/pre-commit and make it executable.
#
# This is provided just as an example of how to use a pre-commit hook to
# catch nasties in your code.

# Work out what to diff against; really, HEAD will work for any established repository.
if git rev-parse --verify HEAD >/dev/null 2>&1
then
    against=HEAD
else
    # Initial commit: diff against an empty tree object
    against=4b825dc642cb6eb9a060e54bf8d69288fbee4904
fi

diffstr=`git diff --cached $against | grep -e '^\+.*DONOTCOMMIT.*$'`
if [[ -n "$diffstr" ]] ; then
    echo "You have left DONOTCOMMIT in your changes; you can't commit until it has been removed."
    exit 1
fi

Learn more about Git Hooks

Todo

  • More automated scripts to come…
Author
[Bhavin Patel]

JSON-Server as a Fake REST API in Frontend Development


Frontend development is changing day by day, and we have to learn a lot more stuff. When we start learning a new framework or library, the first recommended exercise is to build a todo list app, which exercises all the CRUD functions. But there is usually no ready-made backend available to build it against.

Simulate a backend server and a REST API with a simple JSON file.

To overcome that problem, json-server came into the picture. With it, we can make a fake REST API. I have used it in my app and thought of sharing it with the frontend community.

JSON Server is an npm package that lets you create a REST JSON web service. All we need is a JSON file, and that will be used as our backend REST store.

#Installing JSON Server

You can either install it locally for a specific project or globally. I will install it locally.

$ npm install -D json-server

The above single-line command installs json-server. The -D flag makes the package appear in your devDependencies. I am not going to explain much about that here; if you want to learn more, go through the docs for npm install. You can check your JSON Server version using json-server -v.
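
Since the package is installed locally, the json-server binary is not on your PATH; an npm script is the usual way to run it (the script name api is my own choice):

{
  "scripts": {
    "api": "json-server --watch db.json"
  }
}

Now npm run api starts the server.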

#JSON file

As per the standard convention, I am going to name the file db.json; you can name it as per your needs.

{
  "Todos": [
    {
      "id": 1,
      "todo": "Check Todo"
    },
    {
      "id": 2,
      "todo": "New Todo"
    }
  ]
}

For simplicity, I have included two elements in the Todos array. You can add more.

#Start the JSON Server

$ json-server --watch db.json

Your JSON Server will be running on port 3000.

Now that we have our server and API running, we can test and access them with a tool like Postman or Insomnia.

These are REST clients that help us run http calls.

#CRUD Operations

Moving on to the CRUD operations, this is how we can access our data using RESTful routes.

Routing URLs
[GET]    http://localhost:3000/Todos
[POST]   http://localhost:3000/Todos (todo fields in the request body)
[PUT]    http://localhost:3000/Todos/id (updated fields in the request body)
[DELETE] http://localhost:3000/Todos/id
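
As a quick sketch of calling one of these routes from frontend code (using the browser's fetch; the todo field matches the db.json above):

// POST a new todo; json-server assigns the id automatically
fetch('http://localhost:3000/Todos', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ todo: 'Write docs' })
})
    .then(function (res) { return res.json(); })
    .then(function (created) { console.log(created); });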

#Testing via Postman

GET request – [Postman screenshot]

POST request – [Postman screenshot]

PUT request – [Postman screenshot]

DELETE request – [Postman screenshot]

 

Thanks to Madhankumar for his help with this article.

Happy Coding 🙂

 

Maintaining a Private NPM registry for your Organization with Sinopia–Private NPM server


In this post, we will take a quick dig at setting up our own private Node Package Registry with Sinopia.

“sinopia is a private/caching npm repository server”

Maintaining a private NPM for your organization/team is very helpful when you would like to share the code only with them and not the entire universe. You can develop and maintain all your organization specific projects and their components as node packages that can be reused.

If there are multiple teams working, each of them can upload their own reusable node modules to the private NPM, and other teams can use them. For example, a file upload component or an organization-specific HTML5 boilerplate. This way, when you need something, all you have to do is run npm install my-h5bp, which will download the boilerplate for you.

So, let us get started.

Official Approach

The folks at NPM have documented a process for replicating the npm registry privately. You can check out this blog post to achieve the same, but the process is quite complex.

Another enterprise solution for setting up your private NPM is npmE (npm Enterprise). You can find more info, including a quick video, here: npm Enterprise.

Other Approaches

There are a couple of other solutions, like Kappa and node-reggie, which let you set up a private NPM in a few quick steps.

But somehow I was drawn to sinopia for its quick installation, setup, and ease of use. Sinopia uses the file system to manage the registry. The best part about Sinopia is that it does not sync the public registry by default; it caches packages only when they are downloaded for the first time. This way you can save some space on your server (with the fast-growing public registry and all).

Setup and Configure Sinopia

Setting up Sinopia, as mentioned earlier, is pretty easy. You need to have Node installed on the machine where you are setting up the private NPM. Open a terminal/prompt and cd to the root folder:

Windows
Mac/*nix

> cd /
$ cd ~

Inside the root directory, create a new directory named sinopia

mkdir sinopia && cd sinopia

Next, we will install sinopia globally. Run

npm install -g sinopia

(use sudo if needed)

Now we will start the server. Run:

sinopia

If you are launching the server for the first time, you will be asked to create a config file. Select Yes.

Note: If you launch the server from a different folder and that folder does not have the config file, you will be asked this question again. (I do not know why!) So make sure you launch the server from the same folder, the one we cd'ed into earlier.


Now, when you navigate to http://localhost:4873/ you will see a message like:

Web interface is a work-in-progress right now, so it is disabled by default. If you want to play with it, you can enable it in the config file.

To enable the web interface, we need to tweak the config file. From the current folder, open the config.yaml file. Scroll to the section named web and set enabled to true.

Kill the sinopia server and relaunch it. Now navigate to http://localhost:4873 and you should see the web interface.

Sweet, right?!

[Important Step]

Now, on any client machine from which the private NPM will be accessed, you need to set the NPM registry URL to point to our private NPM server instead of the public registry.

If your server and client are the same machine, you can run

npm set registry http://localhost:4873

And if your server and client are on different machines, you need to run

npm set registry http://path.to.your.server:4873
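
Either command simply writes a line like the following to your ~/.npmrc:

registry=http://path.to.your.server:4873/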

In your config.yaml you can configure the host and port too: navigate to the advanced section and update the config. Kill sinopia and restart it, and your web URL will now be http://localhost:2772. Reverting the port back to 4873, we will continue.
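
For reference, that host/port change boils down to a single line in config.yaml (sinopia uses a listen key for this, per its sample config; treat the exact key name as an assumption for your version):

listen: localhost:2772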

Setup User

By default there is an admin user set up, which you can find under the users section in config.yaml. This is the default admin account with a SHA-1 hashed password. I am not sure what this password is, so I will reset it to admin$123.

For that, open a new terminal/prompt, run node, then execute:

crypto.createHash('sha1').update('admin$123').digest('hex')

and paste the generated hash back into the config file.
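
If you prefer a one-liner instead of the node REPL, something like this should produce the same hash:

node -e 'console.log(require("crypto").createHash("sha1").update("admin$123").digest("hex"))'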

Save the config file after pasting the generated hash, restart sinopia, and log in with the admin user. The password will be admin$123.

You are logged in!!

To restrict access to the private registry, you can run

npm set always-auth true

This way you can secure all your private packages (more on conditional securing in a moment).

Now, a new client who would like to access the registry would need to create an account by running

npm adduser --registry http://localhost:4873/

Provide user details and bam!! You have a newly authenticated account for your private registry. Now you will see a new file created inside the sinopia folder:

$ tree sinopia
sinopia
|-- config.yaml
`-- htpasswd

0 directories, 2 files

The htpasswd file contains the authentication data. It is better not to mess with this file.

Downloading packages

Now that we have everything set up, we will download our first package. Create a new folder named privateProj and open a new terminal/prompt inside this folder.

Now run

npm init

to start a new node project. Next, we will install diskDB, a node package for managing JSON files. If you look at your sinopia folder now, you will see that there are only 2 files: the config and the htpasswd file. Once we start pulling packages, a new folder named storage will be created. You can configure the storage folder path in config.yaml.

From inside the privateProj run

npm install diskdb --save

And you should see the install complete successfully. Do remember, diskdb is not in our private registry; Sinopia has fetched this package from the public repo. Now, if you look at the sinopia folder, you will see the package cached under storage.

If the public registry is offline or not available for some reason, sinopia will fetch packages from this cache. You can configure this behaviour in the uplinks section.
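
For reference, the default uplink definition in config.yaml looks roughly like this (key names taken from sinopia's sample config):

uplinks:
  npmjs:
    url: https://registry.npmjs.org/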

Now that we have diskdb installed in our privateProj, we will build a simple app and publish it to our private registry.

Build & Publish a Node module

You can check out my post Write your own Node Modules to learn how to create your own node modules and publish them.

Do note that as long as your registry URL points to the private repo, the publish command will not save the package to the public registry.

Now, to build a sample node module/reusable component, we will create a folder named db and a file named app.js inside the privateProj folder.

Update the app.js as below

app.js

var db = require('diskdb');
db = db.connect('db', ['fruits']);

var fruits = ['Apple', 'Mango', 'Orange'];
db.fruits.save(fruits);

var printFruits = function() {
    console.log(db.fruits.find());
}

// To test the app locally
// printFruits(); // << uncomment this
// and run
// $ node app.js

exports.printFruits = printFruits;

Now our app is ready. We would like to share it with our team so they can reuse the printFruits() function.

For that, we will publish this app to our private registry. Run

npm publish

and you should see an error telling you that the user arvind does not have access to publish to our private registry. For this to work, we need to grant access. Open config.yaml, scroll down to the packages section, and add the user name to allow_publish. Save the file and restart the sinopia server. Now when you publish, it should succeed.

And when you navigate to http://localhost:4873 you should see the newly published package. The sinopia folder will now contain it under storage as well.

Simple and easy, right?!

Now, if someone wants to use your package all they need to do is run

npm install privateProj --save

the same way one would from the public registry.

And then in your project, you can directly use

myApp.js

var pp = require('privateProj');
pp.printFruits();

And run

node myApp.js

If you want to maintain a different "namespace" for your private packages, you can do so by prefixing your packages with your organization name or project name and updating the config.yaml accordingly, as sketched below.
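A rough sketch of such a packages section (myorg-* is a hypothetical prefix, and the key names follow sinopia's sample config; they may differ slightly between versions):

packages:
  'myorg-*':
    # private packages: no proxying to the public registry
    allow_access: $all
    allow_publish: $authenticated
    storage: 'private-storage'
  '*':
    allow_access: $all
    allow_publish: $authenticated
    proxy: npmjs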

You can also set up different storage for your private repos.

That is all you need to set up your private NPM and manage packages.

Note: If you followed the above post on a local machine and want to revert your npm registry to the public one, execute

npm config set registry https://registry.npmjs.org

or

npm config set registry http://registry.npmjs.org


How to host Sinopia in IIS on Windows?

package.json

{
  "name": "iisnode-sinopia",
  "version": "1.0.0",
  "description": "Hosts sinopia in iisnode",
  "main": "start.js",
  "dependencies": {
    "sinopia": "^1.3.1"
  }
}


start.js

process.argv.push('-l', 'unix:' + process.env.PORT);
require('./node_modules/sinopia/lib/cli.js');

web.config

<configuration>
  <system.webServer>
    <!-- indicates that the start.js file is a node.js application to be handled by the iisnode module -->
    <handlers>
      <add name="iisnode" path="start.js" verb="*" modules="iisnode" />
    </handlers>
    <rewrite>
      <rules>
        <!-- iisnode folder is where iisnode stores its logs. These should never be rewritten -->
        <rule name="iisnode" stopProcessing="true">
          <match url="iisnode*"/>
          <action type="None"/>
        </rule>
        <!-- Rewrite all other urls in order for sinopia to handle these -->
        <rule name="sinopia">
          <match url="/*" />
          <action type="Rewrite" url="start.js" />
        </rule>
      </rules>
    </rewrite>
    <!-- exclude node_modules directory and subdirectories from serving by IIS since these are implementation details of node.js applications -->
    <security>
      <requestFiltering>
        <hiddenSegments>
          <add segment="node_modules" />
        </hiddenSegments>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>

—————————————————————————————————————————

Sinopia Commands

npm publish (to publish a package) or npm publish --tag <tag>

npm unpublish --force (to unpublish / delete a package)

npm install / npm update (to install or update packages)

npm dist-tag add <package>@<version>

npm adduser {newuser}

npm search

Useful Links

1. http://resolvethis.com/how-to-create-a-new-sinopia-user/

2. http://resolvethis.com/how-to-setup-a-private-npm-repository-with-sinopia/

3. https://www.npmjs.com/package/sinopia

4. http://perrymitchell.net/article/private-npm-repository-with-sinopia/

 

Happy Coding 🙂

LoopBack.io with Node.js (How-to)


Let LoopBack Do It: A Walkthrough of the Node API Framework You’ve Been Dreaming Of

BY JOVAN JOVANOVIC

It’s needless to mention the growing popularity of Node.js for application development. eBay has been running a production Node API service since 2011. PayPal is actively rebuilding their front-end in Node. Walmart’s mobile site has become the biggest Node application, traffic wise. On Thanksgiving weekend in 2014, Walmart servers processed 1.5 billion requests, 70 percent of which were delivered through mobile and powered by Node.js. On the development side, the Node package manager (npm) continues to grow rapidly, recently surpassing 150,000 hosted modules.

While Ruby has Rails and Python has Django, the dominant application development framework for Node has yet to be established. But, there is a powerful contender gaining steam: LoopBack, an open source API framework built by San Mateo, Calif., company StrongLoop. StrongLoop is an important contributor to the latest Node version, not to mention the current maintainers of Express, one of the most popular Node frameworks in existence.

IMAGE: loopback and node

Let’s take a closer look at LoopBack and its capabilities by putting everything into practice and building an example application.

What is LoopBack and How Does It Work with Node?

LoopBack is a framework for creating APIs and connecting them with backend data sources. Built on top of Express, it can take a data model definition and easily generate a fully functional end-to-end REST API that can be called by any client.

LoopBack comes with a built-in client, API Explorer. We’ll use this since it makes it easier to see the results of our work, and so that our example can focus on building the API itself.

You will of course need Node installed on your machine to follow along. Get it here. npm comes with it, so you can install the necessary packages easily. Let’s get started.

Create a Skeleton

Our application will manage people who would like to donate gifts, or things they just don’t need anymore, to somebody who might need them. So, the users will be Donors and Receivers. A Donor can create a new gift and see the list of gifts. A Receiver can see the list of gifts from all users, and can claim any that are unclaimed. Of course, we could build Donors and Receivers as separate roles on the same entity (User), but let’s try separating them so we can see how to build relations in LoopBack. The name of this groundbreaking application will be Givesomebody.

Install the StrongLoop command line tools through npm:

$ npm install -g strongloop

Then run LoopBack’s application generator:

$ slc loopback

     _-----_
    |       |    .--------------------------.
    |--(o)--|    |  Let's create a LoopBack |
   `---------´   |       application!       |
    ( _´U`_ )    '--------------------------'
    /___A___\    
     |  ~  |     
   __'.___.'__   
 ´   `  |° ´ Y ` 

? What's the name of your application? Givesomebody

Let’s add a model. Our first model will be called Gift. LoopBack will ask for the data source and base class. Since we haven’t set up the data source yet, we can put db (memory). The base class is an auto-generated model class, and we want to use PersistedModel in this case, as it already contains all the usual CRUD methods for us. Next, LoopBack asks if it should expose the model through REST (yes), and the name of the REST service. Press enter here to use the default, which is simply the plural of the model name (in our case, gifts).

$ slc loopback:model

? Enter the model name: Gift
? Select the data-source to attach Gift to: (Use arrow keys)
❯ db (memory)
? Select model's base class: (Use arrow keys)
  Model
❯ PersistedModel
? Expose Gift via the REST API? (Y/n) Yes
? Custom plural form (used to build REST URL):

Finally, we give the names of properties, their data types, and required/not-required flags. Gift will have name and description properties:

Let's add some Gift properties now.

Enter an empty property name when done.
? Property name: name
   invoke   loopback:property
? Property type: (Use arrow keys)
❯ string
? Required? (y/N) Yes

Enter an empty property name to indicate you are done defining properties.

The model generator will create two files which define the model in the application’s common/models: gift.json and gift.js. The JSON file specifies all metadata about the entity: properties, relations, validations, roles and method names. The JavaScript file is used to define additional behaviour, and to specify remote hooks to be called before or after certain operations (e.g., create, update, or delete).
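For instance, the generated gift.json for our model should look roughly like this (reconstructed from the steps above, not copied from a real run):

{
  "name": "Gift",
  "base": "PersistedModel",
  "idInjection": true,
  "properties": {
    "name": {
      "type": "string",
      "required": true
    },
    "description": {
      "type": "string"
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}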

The other two model entities will be our Donor and Receiver models. We can create them using the same process, except this time let’s put User as the base class. It will give us some properties like username, password, email out of the box. We can add just name and country, for example, to have a full entity. For the Receiver we want to add the delivery address, too.

Project Structure

Let’s have a look at the generated project structure:

IMAGE: Project Structure

The three main directories are:

  • /server – Contains node application scripts and configuration files.
  • /client – Contains .js, .html, .css, and all other static files.
  • /common – This folder is common to both the server and the client. Model files go here.

Here’s a detailed breakdown of the contents of each directory, taken from the LoopBack documentation:

File or directory | Description | How to access in code
--- | --- | ---
Top-level application directory | |
package.json | Standard npm package specification. See package.json. | N/A
/server directory – Node application files | |
server.js | Main application program file. | N/A
config.json | Application settings. See config.json. | app.get('setting-name')
datasources.json | Data source configuration file. See datasources.json. For an example, see Create new data source. | app.datasources['datasource-name']
model-config.json | Model configuration file. See model-config.json. For more information, see Connecting models to data sources. | N/A
middleware.json | Middleware definition file. For more information, see Defining middleware. | N/A
/boot directory | Add scripts to perform initialization and setup. See boot scripts. | Scripts are automatically executed in alphabetical order.
/client directory – client application files | |
README.md | LoopBack generators create an empty README file in markdown format. | N/A
Other | Add your HTML, CSS, client JavaScript files. |
/common directory – shared application files | |
/models directory | Custom model files: model definition JSON files, by convention named model-name.json (for example customer.json), and custom model scripts, by convention named model-name.js (for example customer.js). For more information, see Model definition JSON file and Customizing models. | Node: myModel = app.models.myModelName

Build Relationships

In our example, we have a few important relationships to model. A Donor can donate many Gifts, which gives us the relation Donor has many Gift. A Receiver can also receive many Gifts, so we also have the relation Receiver has many Gift. On the other side, Gift belongs to Donor, and can also belong to Receiver if the Receiver chooses to accept it. Let’s put this into the language of LoopBack.

$ slc loopback:relation

? Select the model to create the relationship from: Donor
? Relation type: has many
? Choose a model to create a relationship with: Gift
? Enter the property name for the relation: gifts
? Optionally enter a custom foreign key:
? Require a through model? No

Note that there is no through model; we are just holding the reference to the Gift.

If we repeat the above procedure for Receiver, and add two belongs to relations to Gift, we will accomplish our model design on the back-end side. LoopBack automatically updates the JSON files for the models to express exactly what we just did through these simple dialogs:

// common/models/donor.json
  ...
  "relations": {
    "gifts": {
      "type": "hasMany",
      "model": "Gift",
      "foreignKey": ""
    }
  },
  ...

Add a Datasource

Now let’s see how to attach a real datasource to store all of our application data. For the purposes of this example, we will use MongoDB, but LoopBack has modules to connect with Oracle, MySQL, PostgreSQL, Redis and SQL Server.

First, install the connector:

$ npm install --save loopback-connector-mongodb

Then, add a datasource to your project:

$ slc loopback:datasource

? Enter the data-source name: givesomebody
? Select the connector for givesomebody: MongoDB (supported by StrongLoop)

The next step is to configure your datasource in server/datasources.json. Use this configuration for a local MongoDB server:

  ...
  "givesomebody": {
    "name": "givesomebody",
    "connector": "mongodb",
    "host": "localhost",
    "port": 27017,
    "database": "givesomebody",
    "username": "",
    "password": ""
  }
  ...

Finally, open server/model-config.json and change the datasource for all entities we want to persist in the database to "givesomebody".

{
  ...
  "User": {
    "dataSource": "givesomebody"
  },
  "AccessToken": {
    "dataSource": "givesomebody",
    "public": false
  },
  "ACL": {
    "dataSource": "givesomebody",
    "public": false
  },
  "RoleMapping": {
    "dataSource": "givesomebody",
    "public": false
  },
  "Role": {
    "dataSource": "givesomebody",
    "public": false
  },
  "Gift": {
    "dataSource": "givesomebody",
    "public": true
  },
  "Donor": {
    "dataSource": "givesomebody",
    "public": true
  },
  "Receiver": {
    "dataSource": "givesomebody",
    "public": true
  }
}

Testing Your REST API

It’s time to see what we’ve built so far! We’ll use the awesome built-in tool, API Explorer, which can be used as a client for the service we just created. Let’s try testing REST API calls.

In a separate window, start MongoDB with:

$ mongod

Run the application with:

$ node .

In your browser, go to http://localhost:3000/explorer/. You can see your entities with the list of operations available. Try adding one Donor with a POST /Donors call.

IMAGE: Testing Your API 2

IMAGE: Testing Your API 3

API Explorer is very intuitive; select any of the exposed methods, and the corresponding model schema will be displayed in the bottom right corner. In the data text area, it is possible to write a custom HTTP request. Once the request is filled in, click the “Try it out” button, and the server’s response will be displayed below.

IMAGE: Testing Your API 1

User Authentication

As mentioned above, one of the entities that comes pre-built with LoopBack is the User class. User possesses login and logout methods, and can be bound to an AccessToken entity which keeps the token of the specific user. In fact, a complete user authentication system is ready to go out of the box. If we try calling /Donors/login through API Explorer, here is the response we get:

{
  "id": "9Kvp4zc0rTrH7IMMeRGwTNc6IqNxpVfv7D17DEcHHsgcAf9Z36A3CnPpZJ1iGrMS",
  "ttl": 1209600,
  "created": "2015-05-26T01:24:41.561Z",
  "userId": ""
}

The id is actually the value of the AccessToken, generated and persisted in the database automatically. As you see here, it is possible to set an access token and use it for each subsequent request.

IMAGE: User Authentication


Remote Methods

A remote method is a static method of a model, exposed over a custom REST endpoint. Remote methods can be used to perform operations not provided by LoopBack’s standard model REST API.

Besides the CRUD methods that we get out of the box, we can add as many custom methods as we want. All of them should go into the [model].js file. In our case, let’s add a remote method to the Gift model to check whether a gift is already reserved, and one to list all gifts that are not reserved.

First, let’s add an additional property to the model called reserved. Just add this to the properties in gift.json:

    ...
    "reserved": {
      "type": "boolean"
    }
    ...

The remote method in gift.js should look something like this:

module.exports = function(Gift) {

    // method which lists all free (unreserved) gifts
    Gift.listFree = function(cb) {
        // filter on the reserved property with a `where` clause
        // (`fields` would only select which properties are returned)
        Gift.find({
            where: {
                reserved: false
            }
        }, cb);
    };

    // expose the above method through the REST
    Gift.remoteMethod('listFree', {
        returns: {
            arg: 'gifts',
            type: 'array'
        },
        http: {
            path: '/list-free',
            verb: 'get'
        }
    });

    // method to return whether the gift is free
    Gift.isFree = function(id, cb) {
        Gift.findById(id, function(err, gift) {
            if (err) return cb(err);

            var response;
            if (gift.reserved)
                response = 'Sorry, the gift is reserved';
            else
                response = 'Great, this gift can be yours';

            // respond only once the lookup has finished
            cb(null, response);
        });
    };

    // expose the method through REST
    Gift.remoteMethod('isFree', {
        accepts: {
            arg: 'id',
            type: 'number'
        },
        returns: {
            arg: 'response',
            type: 'string'
        },
        http: {
            path: '/free',
            verb: 'post'
        }
    });
};

So to find out if a particular gift is available, the client can now send a POST request to /api/Gifts/free, passing in the id of the gift in question.

Remote Hooks

Sometimes there is a need for execution of some method before or after the remote method. You can define two kinds of remote hooks:

  • beforeRemote() runs before the remote method.
  • afterRemote() runs after the remote method.

In both cases, you provide two arguments: a string that matches the remote method to which you want to “hook” your function, and the callback function. Much of the power of remote hooks is that the string can include wildcards, so it is triggered by any matching method.

In our case, let’s set a hook to print information to the console whenever a new Donor is created. To accomplish this, let’s add a “before create” hook in donor.js:

module.exports = function(Donor) {
    Donor.beforeRemote('create', function(context, donor, next) {
        console.log('Saving new donor with name: ', context.req.body.name);
    
        next();
    });
};

The request is called with the given context, and the next() callback in middleware (discussed below) is called after the hook runs.

Access Controls

LoopBack applications access data through models, so controlling access to data means defining restrictions on models; that is, specifying who or what can read and write the data or execute methods on the models. LoopBack access controls are determined by access control lists, or ACLs.

Let’s allow Donors and Receivers who are not logged in to view gifts, but only logged-in Donors to create and delete them.

$ slc loopback:acl

To begin, let’s deny everyone access to all endpoints.

? Select the model to apply the ACL entry to: Gift
? Select the ACL scope: All methods and properties
? Select the access type: All (match all types)
? Select the role: All users
? Select the permission to apply: Explicitly deny access

Next, allow everyone to read from Gift models:

$ slc loopback:acl

? Select the model to apply the ACL entry to: Gift
? Select the ACL scope: All methods and properties
? Select the access type: Read
? Select the role: All users
? Select the permission to apply: Explicitly grant access

Then, we want to allow authenticated users to create Gifts:

$ slc loopback:acl

? Select the model to apply the ACL entry to: Gift
? Select the ACL scope: A single method
? Enter the method name: create
? Select the role: Any authenticated user
? Select the permission to apply: Explicitly grant access

And finally, let’s allow the owner of the gift to make any changes:

$ slc loopback:acl

? Select the model to apply the ACL entry to: Gift
? Select the ACL scope: All methods and properties
? Select the access type: Write
? Select the role: The user owning the object
? Select the permission to apply: Explicitly grant access

Now when we review gift.json, everything should be in place:

"acls": [
  {
    "accessType": "*",
    "principalType": "ROLE",
    "principalId": "$everyone",
    "permission": "DENY"
  },
  {
    "accessType": "READ",
    "principalType": "ROLE",
    "principalId": "$everyone",
    "permission": "ALLOW"
  },
  {
    "accessType": "EXECUTE",
    "principalType": "ROLE",
    "principalId": "$authenticated",
    "permission": "ALLOW",
    "property": "create"
  }
],

One important note here: $authenticated is a predefined role which corresponds to all users in the system (both Donors and Receivers), but we only want to allow Donors to create new Gifts. Therefore, we need a custom role. As Role is one more entity we get out of the box, we can leverage its API to create the authenticatedDonor role in the boot function, and then just modify principalId in gift.json.

It will be necessary to create a new file, server/boot/script.js, and add the following code (boot scripts export a function that receives the app object, which gives us access to the models):

module.exports = function(app) {
    var Role = app.models.Role;

    Role.create({
        name: 'authenticatedDonor'
    }, function(err, role) {
        if (err) return console.error(err);
    });
};

The RoleMapping entity maps Roles to Users. Be sure that Role and RoleMapping are both exposed through REST. In server/model-config.json, check that "public" is set to true for the Role entity. Then in donor.js, we can write a “before create” hook that will map the userID and roleID in the RoleMapping POST API call.
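A minimal sketch of what that hook might look like (my own illustration, assuming the authenticatedDonor role was created by the boot script above):

// common/models/donor.js
module.exports = function(Donor) {
    Donor.afterRemote('create', function(context, donor, next) {
        var Role = Donor.app.models.Role;
        var RoleMapping = Donor.app.models.RoleMapping;

        // look up the custom role and map the newly created donor to it
        Role.findOne({ where: { name: 'authenticatedDonor' } }, function(err, role) {
            if (err) return next(err);
            RoleMapping.create({
                principalType: RoleMapping.USER,
                principalId: donor.id,
                roleId: role.id
            }, next);
        });
    });
};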

Middleware

Middleware contains functions that are executed when a request is made to the REST endpoint. As LoopBack is based on Express, it uses Express middleware with one additional concept, called “middleware phases.” Phases are used to clearly define the order in which functions in middleware are called.

Here is the list of predefined phases, as provided in the LoopBack docs:

  1. initial – The first point at which middleware can run.
  2. session – Prepare the session object.
  3. auth – Handle authentication and authorization.
  4. parse – Parse the request body.
  5. routes – HTTP routes implementing your application logic. Middleware registered via the Express API app.use, app.route, app.get (and other HTTP verbs) runs at the beginning of this phase. Use this phase also for sub-apps like loopback/server/middleware/rest or loopback-explorer.
  6. files – Serve static assets (requests are hitting the file system here).
  7. final – Deal with errors and requests for unknown URLs.

Each phase has three subphases. For example, the subphases of the initial phase are:

  1. initial:before
  2. initial
  3. initial:after

Let’s take a quick look at our default middleware.json:

{
  "initial:before": {
    "loopback#favicon": {}
  },
  "initial": {
    "compression": {},
    "cors": {
      "params": {
        "origin": true,
        "credentials": true,
        "maxAge": 86400
      }
    }
  },
  "session": {
  },
  "auth": {
  },
  "parse": {
  },
  "routes": {
  },
  "files": {
  },
  "final": {
    "loopback#urlNotFound": {}
  },
  "final:after": {
    "errorhandler": {}
  }
}

In the initial phase, we call loopback.favicon() (loopback#favicon is the middleware id for that call). Then, the third-party npm modules compression and cors are called (with or without parameters). In the final phase, we have two more calls: urlNotFound is a LoopBack call, and errorhandler is a third-party module. This example demonstrates that a lot of built-in calls can be used just like external npm modules. And of course, we can always create our own middleware and register them through this JSON file.
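
As a sketch of that last point (the file name and phase are my own choices), a custom middleware module can be registered in a phase by pointing middleware.json at it:

// server/middleware/log-request.js
// LoopBack middleware modules export a factory that returns a
// standard Express middleware function.
module.exports = function(options) {
    return function logRequest(req, res, next) {
        console.log(req.method, req.url);
        next();
    };
};

// and in middleware.json:
// "routes:before": {
//   "./middleware/log-request": {}
// }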

loopback-boot

To wrap up, let’s mention a module which exports the boot() function that initializes the application. In server/server.js you’ll find the following piece of code, which bootstraps the application:

boot(app, __dirname, function(err) {
    if (err) throw err;
  
    // start the server if `$ node server.js`
    if (require.main === module)
        app.start();
});

This script will search the server/boot folder, and load all the scripts it finds there in alphabetical order. Thus, in server/boot, we can specify any script which should be run at start. One example is explorer.js, which runs API Explorer, the client we used for testing our API.

Got the repetition blues? Don’t build that Node API from scratch again. Let LoopBack do it!

Conclusion

Before I leave you, I would like to mention StrongLoop Arc, a graphical UI that can be used as an alternative to slc command line tools. It also includes tools for building, profiling and monitoring Node applications. For those who are not fans of the command line, this is definitely worth trying.

IMAGE: Conclusion

Generally speaking, LoopBack can save you a lot of manual work since you are getting a lot of stuff out of the box. It allows you to focus on application-specific problems and business logic. If your application is based on CRUD operations and manipulating predefined entities, if you are sick of rewriting the user’s authentication and authorization infrastructure when tons of developers have written that before you, or if you want to leverage all the advantages of a great web framework like Express, then building your REST API with LoopBack can make your dreams come true. It’s a piece of cake!

Source: https://www.toptal.com/nodejs/let-loopback-do-it-a-walkthrough-of-the-node-api-framework-you-ve-been-dreaming-of

Happy Coding 🙂

What are the most famous web apps built on top of Node.js?


Here are some Node.js apps that are famous for their scale and ridiculous performance.

Walmart switched over to Node.js on a Black Friday, got more than 200 million visitors that day, and never went above 1% CPU.

LinkedIn rewrote their mobile backend in Node.js, and proceeded to get 20 times the performance out of 1/10 the servers.

Groupon increased page load speed by 50% by switching from Ruby on Rails to Node.js. They also reported being able to launch new features much faster than before.

PayPal ran an experiment where two teams built identical apps – one in Java and one in Node.js. The Node.js team built theirs in half the time, and their app's response times were 50% faster than the Java app's. You can read more about these incredible performance gains (and developer productivity gains).

IBM X-Force Exchange:
The backend (API) runs on Node.js in a CloudFoundry environment. This makes it easy to scale the whole thing horizontally and vertically on demand. The backend handles over 700 TB of Threat Intelligence data for thousands of customers: in a single(!) thread. [IBM X-Force Exchange]

Amazon uses node.js for certain services in their backend. Their newest website is also based on Angular.js. It is likely that their frontend is served by a simple node.js webserver instance. [At least you can use node in AWS: Node.js]

Netflix moves (or moved) from Java to Javascript in their backend. [Building With Node.js At Netflix]

Many companies and projects are switching to Node.js, like:

  1. Klout
  2. Koding
  3. Microsoft
  4. PayPal
  5. Yahoo
  6. simplereach.com
  7. Quad
  8. NodePing
  9. linkedin
  10. Flickr
  11. duckduckgo.com

Happy Coding… 🙂

Using Edge.js to combine Node.js with C#


Getting Familiar with Edge.js

To bring .NET and Node.js together, Edge.js has some pre-requisites. It runs on .NET 4.5, so you must have .NET 4.5 installed. As Node.js treats all I/O and Network calls as slower operations, Edge.js assumes that the .NET routine to be called is a slower operation and handles it asynchronously. The .NET function to be called has to be an asynchronous function as well.

The function is assigned to a delegate of type Func<object, Task<object>>. This means, the function is an asynchronous one that can take any type of argument and return any type of value. Edge.js takes care of converting the data from .NET type to JSON type and vice-versa. Because of this process of marshalling and unmarshalling, the .NET objects should not have circular references. Presence of circular references may lead to infinite loops while converting the data from one form to the other.

Hello World using Edge

Edge.js can be added to a Node.js application through NPM. Following is the command to install the package and save it to package.json file:

> npm install edge --save

The edge object can be obtained in a Node.js file as:

var edge = require('edge');

The edge object can accept inline C# code, read code from a .cs or .csx file, and also execute the code from a compiled dll. We will see all of these approaches.

To start with, let’s write a “Hello world” routine inline in C# and call it using edge. Following snippet defines the edge object with inline C# code:

var helloWorld = edge.func(function () {
    /*async(input) => {
        return "Hurray! Inline C# works with edge.js!!!";
    }*/
});

The asynchronous and anonymous C# function passed in the above snippet is compiled dynamically before calling it. The inline code has to be passed as a multiline comment. The method edge.func returns a proxy function that internally calls the C# method. So the C# method is not called till now. Following snippet calls the proxy:

helloWorld(null, function(error, result) {
    if (error) {
        console.log("Error occurred.");
        console.log(error);
        return;
    }
    console.log(result);
});

In the above snippet, we are passing a null value as the first parameter of the proxy since we are not using the input value. The callback function is similar to any other callback function in Node.js, accepting error and result as parameters.
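
The Edge.js docs also describe a synchronous style of invocation: passing true as the second argument returns the result directly, provided the .NET side completed synchronously (it throws otherwise). A quick sketch:

// synchronous call; only valid if the .NET function does not await anything
var result = helloWorld(null, true);
console.log(result);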

We can rewrite the same Edge.js proxy creation by passing the C# code in the form of a string instead of a multiline comment. Following snippet shows this:

var helloWorld = edge.func(
    'async(input) => {' +
    '    return "Hurray! Inline C# works with edge.js!!!";' +
    '}'
);

We can pass a class in the snippet and call a method from the class as well. By convention, the name of the class should be Startup and the name of the method should be Invoke. The Invoke method will be attached to a delegate of type Func<object, Task<object>>. The following snippet shows the usage of a class:

var helloFromClass = edge.func(function () {
    /*
    using System.Threading.Tasks;

    public class Startup
    {
        public async Task<object> Invoke(object input)
        {
            return "Hurray! Inline C# class works with edge.js!!!";
        }
    }
    */
});

It can be invoked the same way we did previously:

helloFromClass(10, function (error, result) {
    if (error) {
        console.log("error occurred...");
        console.log(error);
        return;
    }
    console.log(result);
});

A separate C# file

Though it is possible to write the C# code inline, being developers, we always want to keep the code in a separate file for better organization of the code. By convention, this file should have a class called Startup with the method Invoke. The Invoke method will be added to the delegate of type Func<object, Task<object>>.

Following snippet shows content in a separate file, Startup.cs:

using System.Threading.Tasks;

public class Startup
{
    public async Task<object> Invoke(object input)
    {
        return new Person() {
            Name = "Alex",
            Occupation = "Software Professional",
            Salary = 10000,
            City = "Tokyo"
        };
    }
}

public class Person {
    public string Name { get; set; }
    public string Occupation { get; set; }
    public double Salary { get; set; }
    public string City { get; set; }
}

Performing CRUD Operations on SQL Server

Now that you have a basic idea of how Edge.js works, let’s build a simple application that performs CRUD operations on a SQL Server database using Entity Framework and call this functionality from Node.js. As we will have a considerable amount of code to setup Entity Framework and perform CRUD operations in C#, let’s create a class library and consume it using Edge.js.

Creating Database and Class Library

As a first step, create a new database named EmployeesDB and run the following commands to create the employees table and insert data into it:

CREATE TABLE Employees(
    Id INT IDENTITY PRIMARY KEY,
    Name VARCHAR(50),
    Occupation VARCHAR(20),
    Salary INT,
    City VARCHAR(50)
);

INSERT INTO Employees VALUES
('Ravi', 'Software Engineer', 10000, 'Hyderabad'),
('Rakesh', 'Accountant', 8000, 'Bangalore'),
('Rashmi', 'Govt Official', 7000, 'Delhi');

Open Visual Studio, create a new class library project named EmployeesCRUD, and add a new Entity Data Model to the project pointing to the database created above. To make the process of consuming the dll in Edge.js easier, let’s assign the connection string inline in the constructor of the context class. Following is the constructor of the context class that I have in my class library:

public EmployeesModel()
    : base("data source=.;initial catalog=EmployeesDB;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework;")
{
}

Add a new class to the project and name it EmployeesOperations.cs. This file will contain the methods that interact with Entity Framework and perform CRUD operations on the Employees table. As a best practice, let’s implement the IDisposable interface in this class and dispose of the context object in the Dispose method. Following is the basic setup in this class:

public class EmployeesOperations : IDisposable
{
    EmployeesModel context;

    public EmployeesOperations()
    {
        context = new EmployeesModel();
    }

    public void Dispose()
    {
        context.Dispose();
    }
}

As we will be calling methods of this class directly using Edge.js, the methods have to follow signature of the delegate that we discussed earlier. Following is the method that gets all employees:

public async Task<object> GetEmployees(object input)
{
    return await context.Employees.ToListAsync();
}

There is a challenge with the methods performing add and edit operations, as we need to convert the input data from object to Employee type. This conversion is not straightforward, as the object passed into the .NET function is a dynamic expando object. We need to convert the object into a dictionary and then read the values using property names as keys. The following method performs this conversion before inserting data into the database:

public async Task<object> AddEmployee(object emp)
{
    var empAsDictionary = (IDictionary<string, object>)emp;
    var employeeToAdd = new Employee() {
        Name = (string)empAsDictionary["Name"],
        City = (string)empAsDictionary["City"],
        Occupation = (string)empAsDictionary["Occupation"],
        Salary = (int)empAsDictionary["Salary"]
    };

    var addedEmployee = context.Employees.Add(employeeToAdd);
    await context.SaveChangesAsync();
    return addedEmployee;
}

The same rule applies to the edit method as well. It is shown below:

public async Task<object> EditEmployee(object input)
{
    var empAsDictionary = (IDictionary<string, object>)input;
    var id = (int)empAsDictionary["Id"];
    var employeeEntry = context.Employees.SingleOrDefault(e => e.Id == id);

    employeeEntry.Name = (string)empAsDictionary["Name"];
    employeeEntry.Occupation = (string)empAsDictionary["Occupation"];
    employeeEntry.Salary = (int)empAsDictionary["Salary"];
    employeeEntry.City = (string)empAsDictionary["City"];

    context.Entry(employeeEntry).State = System.Data.Entity.EntityState.Modified;
    return await context.SaveChangesAsync();
}

We will compose REST APIs using Express.js and call the above functions inside them. Before that, we need to make the compiled dll of the above class library available to the Node.js application. We can do that by building the class library project and copying the resulting dlls into a folder in the Node.js application.

Creating Node.js Application

Create a new folder in your system and name it ‘NodeEdgeSample’. Create a new folder ‘dlls’ inside it and copy the binaries of the class library project into this folder. You can open this folder using your favorite tool for Node.js. I generally use WebStorm and have started using Visual Studio Code these days.

Add package.json file to this project using “npm init” command (discussed in Understanding NPM article) and add the following dependencies to it:

"dependencies": {

"body-parser": "^1.13.2",

"edge": "^0.10.1",

"express": "^4.13.1"

}

Run npm install to get these packages installed in the project. Add a new file to the project and name it ‘server.js’. This file will contain all of the Node.js code required for the application. First things first, let’s get references to all the packages and add the required middleware to the Express.js pipeline. The following snippet does this:

var edge = require('edge');
var express = require('express');
var bodyParser = require('body-parser');

var app = express();
app.use('/', express.static(require('path').join(__dirname, 'scripts')));
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());

Now, let’s start adding the required Express REST APIs to the application. As already mentioned, the REST endpoints will interact with the compiled dll to achieve their functionality. The dll file can be referenced using the edge.func function. If type and method are not specified, it defaults the class name to Startup and the method name to Invoke. Otherwise, we can override the class and method names using the properties in the object passed into edge.func.

Following is the REST API that returns list of employees:

app.get('/api/employees', function (request, response) {
    var getEmployeesProxy = edge.func({
        assemblyFile: 'dlls\\EmployeeCRUD.dll',
        typeName: 'EmployeeCRUD.EmployeesOperations',
        methodName: 'GetEmployees'
    });

    getEmployeesProxy(null, apiResponseHandler(request, response));
});

The function apiResponseHandler is a curried generic method for all the three REST APIs. This function returns another function that is called automatically once execution of the .NET function is completed. Following is the definition of this function:

function apiResponseHandler(request, response) {
    return function(error, result) {
        if (error) {
            response.status(500).send({ error: error });
            return;
        }
        response.send(result);
    };
}

The implementations of the REST APIs for add and edit are similar to the one above. The only difference is that they pass an input object to the proxy function.

app.post('/api/employees', function (request, response) {
    var addEmployeeProxy = edge.func({
        assemblyFile: "dlls\\EmployeeCRUD.dll",
        typeName: "EmployeeCRUD.EmployeesOperations",
        methodName: "AddEmployee"
    });

    addEmployeeProxy(request.body, apiResponseHandler(request, response));
});

app.put('/api/employees/:id', function (request, response) {
    var editEmployeeProxy = edge.func({
        assemblyFile: "dlls\\EmployeeCRUD.dll",
        typeName: "EmployeeCRUD.EmployeesOperations",
        methodName: "EditEmployee"
    });

    editEmployeeProxy(request.body, apiResponseHandler(request, response));
});

Consuming APIs on a Page

The final part of this tutorial is to consume these APIs on an HTML page. Add a new HTML page to the application and add Bootstrap CSS and Angular.js to this file. This page will list all the employees and provide interfaces to add a new employee and edit the details of an existing employee. Following is the mark-up of the page:

<!doctype html>
<html>
<head>
    <title>Edge.js sample</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css"/>
</head>
<body ng-app="edgeCrudApp">
    <div class="container" ng-controller="EdgeCrudController as vm">
        <div class="text-center">
            <h1>Node-Edge-.NET CRUD Application</h1>
            <hr/>
            <div class="col-md-12">
                <form name="vm.addEditEmployee">
                    <div class="control-group">
                        <input type="text" ng-model="vm.employee.Name" placeholder="Name" />
                        <input type="text" ng-model="vm.employee.Occupation" placeholder="Occupation" />
                        <input type="text" ng-model="vm.employee.Salary" placeholder="Salary" />
                        <input type="text" ng-model="vm.employee.City" placeholder="City" />
                        <input type="button" class="btn btn-primary" ng-click="vm.addOrEdit()" value="Add or Edit" />
                        <input type="button" class="btn" value="Reset" ng-click="vm.reset()" />
                    </div>
                </form>
            </div>
            <br/>
            <div class="col-md-10">
                <table class="table">
                    <thead>
                        <tr>
                            <th style="text-align: center">Name</th>
                            <th style="text-align: center">Occupation</th>
                            <th style="text-align: center">Salary</th>
                            <th style="text-align: center">City</th>
                            <th style="text-align: center">Edit</th>
                        </tr>
                    </thead>
                    <tbody>
                        <tr ng-repeat="emp in vm.employees">
                            <td>{{emp.Name}}</td>
                            <td>{{emp.Occupation}}</td>
                            <td>{{emp.Salary}}</td>
                            <td>{{emp.City}}</td>
                            <td><button class="btn" ng-click="vm.edit(emp)">Edit</button></td>
                        </tr>
                    </tbody>
                </table>
            </div>
        </div>
    </div>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.4.3/angular.min.js"></script>
    <script src="app.js"></script>
</body>
</html>

Add a new folder to the application and name it ‘scripts’. Add a new JavaScript file to this folder and name it ‘app.js’. This file contains the client-side script of the application. Since we are building an Angular.js application, the file defines an Angular module with a controller and a service added to it. The functionality of the file includes:

  • Getting the list of employees on page load
  • Adding a new employee or editing an existing one using the same form
  • Resetting the form to its pristine state once an employee is added or edited

Here’s the code for this file:

(function () {
    var app = angular.module('edgeCrudApp', []);

    app.controller('EdgeCrudController', function (edgeCrudSvc) {
        var vm = this;

        function getAllEmployees() {
            edgeCrudSvc.getEmployees().then(function (result) {
                vm.employees = result;
            }, function (error) {
                console.log(error);
            });
        }

        // The same form serves both add and edit; the presence of an Id
        // on the bound employee decides which API is called.
        vm.addOrEdit = function () {
            vm.employee.Salary = parseInt(vm.employee.Salary, 10);

            if (vm.employee.Id) {
                edgeCrudSvc.editEmployee(vm.employee)
                    .then(function (result) {
                        resetForm();
                        getAllEmployees();
                    }, function (error) {
                        console.log("Error while updating an employee");
                        console.log(error);
                    });
            } else {
                edgeCrudSvc.addEmployee(vm.employee)
                    .then(function (result) {
                        resetForm();
                        getAllEmployees();
                    }, function (error) {
                        console.log("Error while inserting new employee");
                        console.log(error);
                    });
            }
        };

        vm.reset = function () {
            resetForm();
        };

        function resetForm() {
            vm.employee = {};
            vm.addEditEmployee.$setPristine();
        }

        // Clicking Edit binds the selected row object to the form, so
        // changes are reflected in the table as you type.
        vm.edit = function (emp) {
            vm.employee = emp;
        };

        getAllEmployees();
    });

    app.factory('edgeCrudSvc', function ($http) {
        var baseUrl = '/api/employees';

        function getEmployees() {
            return $http.get(baseUrl)
                .then(function (result) {
                    return result.data;
                }, function (error) {
                    return error;
                });
        }

        function addEmployee(newEmployee) {
            return $http.post(baseUrl, newEmployee)
                .then(function (result) {
                    return result.data;
                }, function (error) {
                    return error;
                });
        }

        function editEmployee(employee) {
            return $http.put(baseUrl + '/' + employee.Id, employee)
                .then(function (result) {
                    return result.data;
                }, function (error) {
                    return error;
                });
        }

        return {
            getEmployees: getEmployees,
            addEmployee: addEmployee,
            editEmployee: editEmployee
        };
    });
}());

Save all the files and run the application. You should be able to add and edit employees. I am leaving the task of deleting an employee as an exercise for the reader.
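As a hint, the server-side route for delete would follow the same pattern as the other endpoints; note that DeleteEmployee is a hypothetical method name here, and implementing it in the dll is the actual exercise:

app.delete('/api/employees/:id', function (request, response) {
    var deleteEmployeeProxy = edge.func({
        assemblyFile: 'dlls\\EmployeeCRUD.dll',
        typeName: 'EmployeeCRUD.EmployeesOperations',
        methodName: 'DeleteEmployee' // hypothetical; you would add this to the dll
    });

    deleteEmployeeProxy({ Id: parseInt(request.params.id, 10) }, apiResponseHandler(request, response));
});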

Conclusion

In general, it is challenging to make two different frameworks talk to each other. Edge.js takes away the pain of this integration and provides an easy, clean way to combine the strengths of .NET and Node.js to build great applications. It aligns with the Node.js event loop model and respects the execution model of the platform as well. Let’s thank Tomasz Janczuk for his great work and use this tool effectively!

Download the entire source code of this article (Github)

Happy Coding!

How-to: Node.js Logger (Winston) setup / configuration


I am working on Node.js based REST API development, and I am using a very popular npm logging library called “Winston” for logging errors, exceptions, and even info & debug messages too.

Why do we need it?

Now you must be wondering why we need it, since we already have console.log to check logs during development. BUT what happens when we decide to go live? How do we access the logs? How do we create different log levels?

Well, WINSTON is your answer… following is the setup and configuration.

Manual Logging

If you want to log only the specific things you want to track, there are easy libraries that let you do so. Winston is a very popular one. You can log your output to multiple transports at the same time (e.g. file, console, etc.). It is also supported by third-party cloud services that will be happy to host those outputs for you (more about that next).

Here is a simple snippet showing how to configure Winston to log to both your console and a file:

var winston = require('winston');

// log your output to a file in addition to the console
winston.add(winston.transports.File, { filename: 'somefile.log' });

// log some outputs
winston.log('info', 'Hello distributed log files!');
winston.error('Who let the dogs out?!');
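Winston also supports multiple log levels out of the box (the npm defaults: error, warn, info, verbose, debug, silly). A small sketch of controlling the threshold on the default logger, assuming winston 2.x:

// Lower the default logger's threshold so debug messages are written too.
winston.level = 'debug';

winston.debug('visible now that the level is debug');
winston.warn('warnings are above the threshold either way');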

What if you want a dedicated log file with all logs written to it? That is ideal for production anyway. Here is my setup:

– Create a folder “helpers” in your project and create “logger.js” in that folder.

– logger.js code:

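A minimal sketch of a logger.js along these lines, assuming winston 2.x; the file path, levels, and rotation settings below are illustrative choices:

// helpers/logger.js: a minimal sketch, assuming winston 2.x.
var winston = require('winston');

var logger = new winston.Logger({
    transports: [
        new winston.transports.Console({
            level: 'debug',       // log everything to the console while developing
            colorize: true,
            timestamp: true
        }),
        new winston.transports.File({
            filename: 'logs/app.log', // illustrative path; make sure the folder exists
            level: 'info',            // keep the file limited to info and above
            maxsize: 5242880,         // start a new file after ~5 MB
            maxFiles: 5
        })
    ]
});

module.exports = logger;

Because the configured instance is exported, every module that requires ‘./helpers/logger.js’ shares the same transports and writes to the same file.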

– Now in your server.js or app.js (your main Node.js startup file), add this declaration:

var logger = require('./helpers/logger.js');

– That’s it… you are all set. Verify by logging some simple messages:

logger.info('this is information');
logger.debug('this is debug');
logger.warn('this is warning');
logger.error('this is error');

Thanks to the Winston development team and, of course, sweet Node.js…

Happy Coding!

iisnode installation & configuration


Recently I have been working on Node.js REST API development and wanted to deploy it on IIS. Here are my configuration steps.

Environment:

Windows 8 x64

IIS 8.0

Software:

Download iisnode from https://github.com/tjanczuk/iisnode.

Then go to https://github.com/azure/iisnode/wiki/iisnode-releases and download and install the release that matches your platform (x86 or x64).

Verify

Open IIS (Start -> Run -> inetmgr) and go to Modules. You should see the iisnode module now registered in IIS.

Then I created a sample iisnode + Node.js application and hosted it under the Default Web Site in IIS, as below. You should see the iisnode module in that new site as well; otherwise it won’t work. In the web.config below, the handlers entry registers server.js to be handled by the iisnode module, and the rewrite rule routes all api/* requests to server.js.

Sample web.config:

<configuration>
  <system.webServer>
    <!-- indicates that the server.js file is a node.js application
         to be handled by the iisnode module -->
    <handlers>
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <rewrite>
      <rules>
        <rule name="api">
          <match url="api/*" />
          <action type="Rewrite" url="server.js" />
        </rule>
      </rules>
    </rewrite>
    <directoryBrowse enabled="false" />
    <iisnode
      devErrorsEnabled="true"
      debuggingEnabled="true"
      loggingEnabled="false"
      debuggerPathSegment="debug"
      nodeProcessCommandLine="C:\Program Files (x86)\nodejs\node.exe"
      promoteServerVars="APPL_MD_PATH">
      <!-- NOTE: promoteServerVars is used by the iis-baseUrl middleware in the middlewares folder -->
    </iisnode>
  </system.webServer>

  <system.web>
    <compilation debug="true" />
  </system.web>
</configuration>

server.js code:

var http = require('http');
var port = process.env.port || 1337;

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello, world! [helloworld sample; iisnode version is ' + process.env.IISNODE_VERSION + ', node version is ' + process.version + ']');
}).listen(port);
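To smoke-test the deployment from Node.js itself, request the site and check for the banner; the URL below is hypothetical and assumes the app is hosted under the Default Web Site in a virtual directory named nodeapp:

var http = require('http');

// Hypothetical URL: adjust the host/path to wherever you hosted the app in IIS.
http.get('http://localhost/nodeapp/api/hello', function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        console.log(res.statusCode, body); // expect the "Hello, world!" banner
    });
});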

Hope this helps… Happy Coding!