Angular 2.0 Architecture


Starting a new series – Learning Angular 2.0 – on my GitHub:

https://github.com/bapatel1/Learning-Angular-2.0

##### Angular 2.0 Building Blocks
– Module
– Component
– Template
– Metadata
– Data Binding
– Service
– Directive
– Dependency Injection

##### 1. Module
– Angular apps are modular and so in general we assemble our application from many modules. A typical module is a cohesive block of code dedicated to a single purpose.
– A module exports something of value in that code, typically one thing such as a class.
– Some modules are libraries of other modules.
– Angular itself ships as a collection of library modules called “barrels”. Each Angular library is actually a public facade over several logically related private modules.
– The angular2/core library is the primary Angular library module from which we get most of what we need.
– There are other important Angular library modules too such as angular2/common, angular2/router, and angular2/http.
– e.g. import {Component} from 'angular2/core';
– The key takeaways are:
    – Angular apps are composed of modules.
    – Modules export things — classes, functions, values — that other modules import.
    – We prefer to write our application as a collection of modules, each module exporting one thing.
    – The first module we write will most likely export a component.
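As a sketch of the module idea (the file name, class, and values below are illustrative, not from the original series), a module is a file that exports one thing which other modules import:

```typescript
// hero.ts – a module dedicated to a single purpose: it exports one class.
export class Hero {
  constructor(public id: number, public name: string) {}

  describe(): string {
    return `${this.id}: ${this.name}`;
  }
}

// Another module (e.g. app.component.ts) would then import it:
//   import {Hero} from './hero';
const hero = new Hero(1, "Windstorm");
console.log(hero.describe()); // "1: Windstorm"
```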

##### 2. Component
– Perhaps the first module we meet is one that exports a component class. The component is one of the basic Angular building blocks; we write a lot of them.
– Most applications have an AppComponent. By convention, we’ll find it in a file named app.component.ts.
– A component brings together a template, styles, a selector configuration, and so on.
– Angular creates, updates, and destroys components as the user moves through the application.
– Each component is a TypeScript class, which in turn includes variables, functions, property declarations, etc.

##### 3. Template
– We define a Component’s view with its companion template. A template is a form of HTML that tells Angular how to render the Component.
– A template looks like regular HTML much of the time … and then it gets a bit strange.
– A template declares the data bindings and the HTML DOM structure that make up the view.

##### 4. Metadata
– Metadata tells Angular how to process a class.
– e.g.
```
@Component({
  selector:    'hero-list',
  templateUrl: 'app/hero-list.component.html',
  directives:  [HeroDetailComponent],
  providers:   [HeroService]
})
```
Here we see the @Component decorator which (no surprise) identifies the class immediately below it as a Component class.

selector – a CSS selector that tells Angular to create and insert an instance of this component wherever it finds a `<hero-list>` tag in the parent HTML.

templateUrl – the address of this component’s template

directives – an array of the Components or Directives that this template requires.

providers – an array of dependency injection providers for services that the component requires. This is one way to tell Angular that our component’s constructor requires a HeroService so it can get the list of heroes to display.

##### 5. Data Binding
– There are four forms of data binding syntax.
– Each form has a direction – to the DOM, from the DOM, or in both directions.
– We can group all bindings into three categories by the direction in which data flows. Each category has its distinctive syntax:
    – One Way (from component -> View) – binds properties, attributes, classes, styles.
    ```
    {{expression}}
    [target]="expression"
    bind-target="expression"
    ```
    – One Way (from View -> Component) – binds events.
    ```
    (target)="statement"
    on-target="statement"
    ```
    – Two-way binding – binds in both directions.
    ```
    [(target)]="expression"
    bindon-target="expression"
    ```
For More Details – https://angular.io/docs/ts/latest/guide/template-syntax.html
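To build intuition for the one-way {{expression}} form, here is a toy interpolation sketch (illustrative only; this is not Angular's actual implementation, and the function name is made up):

```typescript
// Replaces each {{name}} in a template with the matching property
// from the component instance – one-way, component -> view.
function interpolate(template: string, component: Record<string, any>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => String(component[name]));
}

const component = { title: "Tour of Heroes", count: 3 };
console.log(interpolate("<h1>{{title}} ({{count}})</h1>", component));
// "<h1>Tour of Heroes (3)</h1>"
```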

##### 6. Directive
– Our Angular templates are dynamic. When Angular renders them, it transforms the DOM according to the instructions given by a directive.
– A directive is a class with directive metadata. In TypeScript we’d apply the @Directive decorator to attach metadata to the class.
– While the component is technically a directive, it is so distinctive and central to Angular applications that we chose to separate the component from the directive in our architectural overview.
– There are two other kinds of directives as well that we call “structural” and “attribute” directives.
– Structural directives alter layout by adding, removing, and replacing elements in the DOM.
    – e.g *ngFor , *ngIf etc.
– Attribute directives alter the appearance or behavior of an existing element. In templates they look like regular HTML attributes, hence the name.
    – e.g. [(ngModel)]

##### 7. Service
– “Service” is a broad category encompassing any value, function or feature that our application needs.
– Almost anything can be a service. A service is typically a class with a narrow, well-defined purpose. It should do something specific and do it well.
    – Examples include:
        – logging service
        – data service
        – message bus
        – tax calculator
        – application configuration
– The most common use of services is to connect components to databases or similar data sources; the service delivers data to the components, which keeps the components independent of the data layer.

##### 8. DI (Dependency Injection)
– “Dependency Injection” is a way to supply a new instance of a class with the fully-formed dependencies it requires. Most dependencies are services. Angular uses dependency injection to provide new components with the services they need.
– dependency injection is wired into the framework and used everywhere.
– the Injector is the main mechanism.
    – an injector maintains a container of service instances that it created.
    – an injector can create a new service instance using a provider.
– a provider is a recipe for creating a service.
– we register providers with injectors.
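The provider/injector relationship above can be sketched in plain TypeScript (a toy model to show the idea, not Angular's real injector; all names here are illustrative):

```typescript
// A provider is a recipe for creating a service.
type Provider<T> = () => T;

// A toy injector: it maintains a container of service instances it created,
// and uses a registered provider to build a service the first time it is requested.
class Injector {
  private providers = new Map<string, Provider<any>>();
  private instances = new Map<string, any>();

  register<T>(token: string, provider: Provider<T>): void {
    this.providers.set(token, provider);
  }

  get<T>(token: string): T {
    if (!this.instances.has(token)) {
      const provider = this.providers.get(token);
      if (!provider) throw new Error(`No provider for ${token}`);
      this.instances.set(token, provider());
    }
    return this.instances.get(token);
  }
}

class HeroService {
  getHeroes() { return ["Windstorm", "Bombasto"]; }
}

const injector = new Injector();
injector.register("HeroService", () => new HeroService());

// Repeated lookups return the same cached instance.
const a = injector.get<HeroService>("HeroService");
const b = injector.get<HeroService>("HeroService");
console.log(a === b); // true
```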

##### 9. Other Stuff
– Animation: A forthcoming animation library makes it easy for developers to animate component behavior without deep knowledge of animation techniques or CSS.
– Bootstrapping: A method to configure and launch the root application component.
– Change Detection: Learn how Angular decides that a component property value has changed and when to update the screen
– Zones: Change Detection uses zones to intercept asynchronous activity and run its change detection strategies.
– Events: The DOM raises events. So can components and services.
– Router: With the Component Router service, users can navigate a multi-screen application in a familiar web browsing style using URLs.
– Forms: Support complex data entry scenarios with HTML-based validation and dirty checking.
– Http: Communicate with a server to get data, save data, and invoke server-side actions with this Angular HTTP client.
– Lifecycle Hooks: We can tap into key moments in the lifetime of a component, from its creation to its destruction, by implementing the “Lifecycle Hook” interfaces.
– Pipes: Services that transform values for display. We can put pipes in our templates to improve the user experience.
    ```
    price | currency:'USD':true
    ```
– Testing: Angular provides a testing library for “unit testing” our application parts as they interact with the Angular framework.
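A pipe is essentially a function that transforms a value for display; the currency usage shown above can be sketched like this (a simplification for intuition only, not Angular's actual CurrencyPipe, and the symbol table is made up):

```typescript
// Toy currency "pipe" mimicking: price | currency:'USD':true
// symbolDisplay = true shows the symbol, false shows the currency code.
function currencyPipe(value: number, code: string, symbolDisplay: boolean): string {
  const symbols: Record<string, string> = { USD: "$", EUR: "€" };
  const prefix = symbolDisplay ? (symbols[code] ?? code) : code;
  return `${prefix}${value.toFixed(2)}`;
}

console.log(currencyPipe(42.5, "USD", true)); // "$42.50"
```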

### Author
Bhavin Patel

Happy Coding!

Using Edge.js to Combine Node.js with C#


Getting Familiar with Edge.js

To bring .NET and Node.js together, Edge.js has some prerequisites. It runs on .NET 4.5, so you must have .NET 4.5 installed. As Node.js treats all I/O and network calls as slower operations, Edge.js assumes that the .NET routine to be called is a slower operation and handles it asynchronously. The .NET function to be called has to be asynchronous as well.

The function is assigned to a delegate of type Func<object, Task<object>>. This means the function is asynchronous, can take any type of argument, and can return any type of value. Edge.js takes care of converting the data from .NET types to JSON and vice versa. Because of this marshalling and unmarshalling process, the .NET objects must not have circular references; circular references may lead to infinite loops while converting the data from one form to the other.
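The circular-reference caveat is easy to demonstrate with plain JSON serialization, which is essentially what the marshalling step does:

```typescript
// An object graph with a circular reference cannot be serialized to JSON,
// so a marshalling layer like Edge.js's cannot convert it either.
const parent: any = { name: "parent" };
const child: any = { name: "child", parent: parent };
parent.child = child; // circular: parent -> child -> parent

try {
  JSON.stringify(parent);
} catch (e) {
  console.log("Serialization failed:", (e as Error).message);
}
```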

Hello World using Edge

Edge.js can be added to a Node.js application through NPM. Following is the command to install the package and save it to package.json file:

> npm install edge --save

The edge object can be obtained in a Node.js file as:

var edge = require('edge');

The edge object can accept inline C# code, read code from a .cs or .csx file, and also execute the code from a compiled dll. We will see all of these approaches.

To start with, let’s write a “Hello world” routine inline in C# and call it using edge. Following snippet defines the edge object with inline C# code:

var helloWorld = edge.func(function () {

/*async(input) => {

return "Hurray! Inline C# works with edge.js!!!";

}*/

});

The asynchronous, anonymous C# function passed in the above snippet is compiled dynamically before being called. The inline code has to be passed as a multiline comment. The method edge.func returns a proxy function that internally calls the C# method, so the C# method has not actually been invoked yet. The following snippet calls the proxy:

helloWorld(null, function(error, result) {

if (error) {

console.log("Error occurred.");

console.log(error);

return;

}

console.log(result);

});

In the above snippet, we pass null as the first parameter of the proxy because we are not using the input value. The callback function is similar to any other callback function in Node.js, accepting error and result as parameters.
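Because the proxy follows Node.js's standard error-first callback convention, it can also be wrapped in a promise with util.promisify. The sketch below uses a stand-in function in place of the real proxy, since the real one needs the .NET runtime:

```typescript
import { promisify } from "util";

// Stand-in for an Edge.js proxy: (input, callback(error, result)).
const helloWorld = (input: any, callback: (error: any, result: string) => void) => {
  callback(null, "Hurray! Inline C# works with edge.js!!!");
};

const helloWorldAsync = promisify(helloWorld);

helloWorldAsync(null).then((result) => console.log(result));
// "Hurray! Inline C# works with edge.js!!!"
```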

We can rewrite the same Edge.js proxy creation by passing the C# code in the form of a string instead of a multiline comment. Following snippet shows this:

var helloWorld = edge.func(

'async(input) => {'+

'return "Hurray! Inline C# works with edge.js!!!";'+

'}'

);

We can pass a class in the snippet and call a method from the class as well. By convention, the name of the class should be Startup and the name of the method should be Invoke. The Invoke method will be attached to a delegate of type Func<object, Task<object>>. The following snippet shows the usage of a class:

var helloFromClass = edge.func(function () {

/*

using System.Threading.Tasks;

public class Startup

{       

public async Task<object> Invoke(object input)

{

return "Hurray! Inline C# class works with edge.js!!!";

}

} */

});

It can be invoked the same way we did previously:

helloFromClass(10, function (error, result) {

if(error){

console.log("Error occurred...");

console.log(error);

return;

}

console.log(result);

});

A separate C# file

Though it is possible to write the C# code inline, as developers we prefer to keep the code in a separate file for better organization. By convention, this file should have a class called Startup with the method Invoke. The Invoke method will be attached to the delegate of type Func<object, Task<object>>.

Following snippet shows content in a separate file, Startup.cs:

using System.Threading.Tasks;

public class Startup

{

public async Task<object> Invoke(object input)

{

return new Person(){

Name="Alex",

Occupation="Software Professional",

Salary=10000,

City="Tokyo"

};

}

}

public class Person{

public string Name { get; set; }

public string Occupation { get; set; }

public double Salary { get; set; }

public string City { get; set; }

}

Performing CRUD Operations on SQL Server

Now that you have a basic idea of how Edge.js works, let’s build a simple application that performs CRUD operations on a SQL Server database using Entity Framework and call this functionality from Node.js. As we will have a considerable amount of code to set up Entity Framework and perform CRUD operations in C#, let’s create a class library and consume it using Edge.js.

Creating Database and Class Library

As a first step, create a new database named EmployeesDB and run the following commands to create the employees table and insert data into it:

CREATE TABLE Employees(

Id INT IDENTITY PRIMARY KEY,

Name VARCHAR(50),

Occupation VARCHAR(20),

Salary INT,

City VARCHAR(50)

);

INSERT INTO Employees VALUES

('Ravi', 'Software Engineer', 10000, 'Hyderabad'),

('Rakesh', 'Accountant', 8000, 'Bangalore'),

('Rashmi', 'Govt Official', 7000, 'Delhi');

Open Visual Studio, create a new class library project named EmployeeCRUD and add a new Entity Data Model to the project pointing to the database created above. To make the process of consuming the dll in Edge.js easier, let’s assign the connection string inline in the constructor of the context class. Following is the constructor of the context class in my class library:

public EmployeesModel()

: base("data source=.;initial catalog=EmployeesDB;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework;")

{

}

Add a new class to the project and name it EmployeesOperations.cs. This file will contain the methods to interact with Entity Framework and perform CRUD operations on the table Employees. As a best practice, let’s implement the interface IDisposable in this class and dispose the context object in the Dispose method. Following is the basic setup in this class:

public class EmployeesOperations : IDisposable

{

EmployeesModel context;

public EmployeesOperations()

{

context = new EmployeesModel();

}

public void Dispose()

{

context.Dispose();

}

}

As we will be calling methods of this class directly using Edge.js, the methods have to follow the signature of the delegate discussed earlier. Following is the method that gets all employees:

public async Task<object> GetEmployees(object input)

{

return await context.Employees.ToListAsync();

}

There is a challenge with the methods performing add and edit operations, as we need to convert the input data from object to the Employee type. This conversion is not straightforward, because the object passed into the .NET function is a dynamic expando object. We need to convert it into a dictionary and then read the values using property names as keys. The following method performs this conversion before inserting data into the database:

public async Task<object> AddEmployee(object emp)

{

var empAsDictionary = (IDictionary<string, object>)emp;

var employeeToAdd = new Employee() {

Name = (string)empAsDictionary["Name"],

City = (string)empAsDictionary["City"],

Occupation = (string)empAsDictionary["Occupation"],

Salary = (int)empAsDictionary["Salary"]

};

var addedEmployee = context.Employees.Add(employeeToAdd);

await context.SaveChangesAsync();

return addedEmployee;

}

The same rule applies to the edit method as well. It is shown below:

public async Task<object> EditEmployee(object input)

{

var empAsDictionary = (IDictionary<string, object>)input;

var id = (int)empAsDictionary["Id"];

var employeeEntry = context.Employees.SingleOrDefault(e => e.Id == id);

employeeEntry.Name = (string)empAsDictionary["Name"];

employeeEntry.Occupation = (string)empAsDictionary["Occupation"];

employeeEntry.Salary = (int)empAsDictionary["Salary"];

employeeEntry.City = (string)empAsDictionary["City"];

context.Entry(employeeEntry).State = System.Data.Entity.EntityState.Modified;

return await context.SaveChangesAsync();

}

We will compose REST APIs using Express.js and call the above functions inside them. Before that, we need to make the compiled dll of the class library available to the Node.js application. We can do this by building the class library project and copying the resulting dlls into a folder in the Node.js application.

Creating Node.js Application

Create a new folder in your system and name it ‘NodeEdgeSample’. Create a new folder ‘dlls’ inside it and copy the binaries of the class library project into this folder. You can open this folder using your favorite tool for Node.js. I generally use WebStorm and have started using Visual Studio Code these days.

Add a package.json file to this project using the “npm init” command (discussed in the Understanding NPM article) and add the following dependencies to it:

"dependencies": {

"body-parser": "^1.13.2",

"edge": "^0.10.1",

"express": "^4.13.1"

}

Run npm install to get these packages installed in the project. Add a new file to the project and name it ‘server.js’. This file will contain all of the Node.js code required for the application. First things first, let’s get references to all the packages and add the required middleware to the Express.js pipeline. The following snippet does this:

var edge = require('edge');

var express = require('express');

var bodyParser = require('body-parser');

var app = express();

app.use('/', express.static(require('path').join(__dirname, 'scripts')));

app.use(bodyParser.urlencoded({ extended: true }));

app.use(bodyParser.json());

Now, let’s start adding the required Express REST APIs to the application. As already mentioned, the REST endpoints will interact with the compiled dll to achieve their functionality. The dll file can be referenced using the edge.func function. If type and method are not specified, it defaults the class name to Startup and the method name to Invoke. Otherwise, we can override the class and method names using the properties of the object passed into edge.func.

Following is the REST API that returns list of employees:

app.get('/api/employees', function (request, response) {

var getEmployeesProxy = edge.func({

assemblyFile: 'dlls\\EmployeeCRUD.dll',

typeName: 'EmployeeCRUD.EmployeesOperations',

methodName: 'GetEmployees'

});

getEmployeesProxy(null, apiResponseHandler(request, response));

});

The function apiResponseHandler is a generic, curried handler shared by all three REST APIs. It returns another function that is called automatically once execution of the .NET function completes. Following is its definition:

function apiResponseHandler(request, response) {

return function(error, result) {

if (error) {

response.status(500).send({error: error});

return;

}

response.send(result);

};

}

The implementations of the REST APIs for add and edit are similar to the one above. The only difference is that they pass an input object to the proxy function.

app.post('/api/employees', function (request, response) {

var addEmployeeProxy = edge.func({

assemblyFile:"dlls\\EmployeeCRUD.dll",

typeName:"EmployeeCRUD.EmployeesOperations",

methodName: "AddEmployee"

});

addEmployeeProxy(request.body, apiResponseHandler(request, response));

});

app.put('/api/employees/:id', function (request, response) {

var editEmployeeProxy = edge.func({

assemblyFile:"dlls\\EmployeeCRUD.dll",

typeName:"EmployeeCRUD.EmployeesOperations",

methodName: "EditEmployee"

});

editEmployeeProxy(request.body, apiResponseHandler(request, response));

});

Consuming APIs on a Page

The final part of this tutorial is to consume these APIs on an HTML page. Add a new HTML page to the application and add the Bootstrap CSS and Angular.js to this file. This page will list all the employees and provide interfaces to add a new employee and edit the details of an existing one. Following is the mark-up of the page:

<!doctype html>

<html>

<head>

<title>Edge.js sample</title>

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css"/>

</head>

<body ng-app="edgeCrudApp">

<div class="container" ng-controller="EdgeCrudController as vm">

<div class="text-center">

<h1>Node-Edge-.NET CRUD Application</h1>

<hr/>

<div class="col-md-12">

<form name="vm.addEditEmployee">

<div class="control-group">

<input type="text" ng-model="vm.employee.Name" placeholder="Name" />

<input type="text" ng-model="vm.employee.Occupation" placeholder="Occupation" />

<input type="text" ng-model="vm.employee.Salary" placeholder="Salary" />

<input type="text" ng-model="vm.employee.City" placeholder="City" />

<input type="button" class="btn btn-primary" ng-click="vm.addOrEdit()" value="Add or Edit" />

<input type="button" class="btn" value="Reset" ng-click="vm.reset()" />

</div>

</form>

</div>

<br/>

<div class="col-md-10">

<table class="table">

<thead>

<tr>

<th style="text-align: center">Name</th>

<th style="text-align: center">Occupation</th>

<th style="text-align: center">Salary</th>

<th style="text-align: center">City</th>

<th style="text-align: center">Edit</th>

</tr>

</thead>

<tbody>

<tr ng-repeat="emp in vm.employees">

<td>{{emp.Name}}</td>

<td>{{emp.Occupation}}</td>

<td>{{emp.Salary}}</td>

<td>{{emp.City}}</td>

<td>

<button class="btn" ng-click="vm.edit(emp)">Edit</button>

</td>

</tr>

</tbody>

</table>

</div>

</div>

</div>

<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.4.3/angular.min.js"></script>

<script src="app.js"></script>

</body>

</html>

Add a new folder to the application and name it ‘scripts’. Add a new JavaScript file to this folder and name it ‘app.js’. This file will contain the client side script of the application. Since we are building an Angular.js application, the file will have an Angular module with a controller and a service added to it. Functionality of the file includes:

  • Getting list of employees on page load
  • Adding an employee or, editing employee using the same form
  • Resetting the form to pristine state once the employee is added or, edited

Here’s the code for this file:

(function(){

var app = angular.module('edgeCrudApp', []);

app.controller('EdgeCrudController', function (edgeCrudSvc) {

var vm = this;

function getAllEmployees(){

edgeCrudSvc.getEmployees().then(function (result) {

vm.employees = result;

}, function (error) {

console.log(error);

});

}

vm.addOrEdit = function () {

vm.employee.Salary = parseInt(vm.employee.Salary);

if(vm.employee.Id) {

edgeCrudSvc.editEmployee(vm.employee)

.then(function (result) {

resetForm();

getAllEmployees();

}, function (error) {

console.log("Error while updating an employee");

console.log(error);

});

}

else{

edgeCrudSvc.addEmployee(vm.employee)

.then(function (result) {

resetForm();

getAllEmployees();

}, function (error) {

console.log("Error while inserting new employee");

console.log(error);

});

}

};

vm.reset= function () {

resetForm();

};

function resetForm(){

vm.employee = {};

vm.addEditEmployee.$setPristine();

}

vm.edit = function(emp){

vm.employee = emp;

};

getAllEmployees();

});

app.factory('edgeCrudSvc', function ($http) {

var baseUrl = '/api/employees';

function getEmployees(){

return $http.get(baseUrl)

.then(function (result) {

return result.data;

}, function (error) {

return error;

});

}

function addEmployee(newEmployee){

return $http.post(baseUrl, newEmployee)

.then(function (result) {

return result.data;

}, function (error) {

return error;

});

}

function editEmployee(employee){

return $http.put(baseUrl + '/' + employee.Id, employee)

.then(function (result) {

return result.data;

}, function (error) {

return error;

});

}

return {

getEmployees: getEmployees,

addEmployee: addEmployee,

editEmployee: editEmployee

};

});

}());

Save all the files and run the application. You should be able to add and edit employees. I am leaving the task of deleting an employee as an assignment for the reader.

Conclusion

In general, it is challenging to make two different frameworks talk to each other. Edge.js takes away the pain of integrating the two and provides an easy, clean way to take advantage of the best features of .NET and Node.js together to build great applications. It aligns with the Node.js event loop model and respects the execution model of the platform as well. Let’s thank Tomasz Janczuk for his great work and use this tool effectively!

Download the entire source code of this article (Github)

Happy Coding!

C# POCO Gen


POCO Generator

By: Yuvalsol

I was recently reading about T4 templates and POCO generation in C# and came across this nice article.

POCO generating application for SQL Server

Download source – 63 KB

Download POCO Generator – 57.9 KB

Introduction

There are plenty of ways to generate POCO classes from a database. The hard way is to handwrite them. This may be good for introductory/one-or-two classes scenarios but is not applicable for production. There are codegen tools, like CodeSmith. The tool detects changes to the database and generates the appropriate POCO classes. There are script tools like T4 (Text Template Transformation Toolkit), which Visual Studio supports. The solution that I opt to create is a stand-alone application, the POCO Generator, that traverses the SQL Server, and generates POCOs from various data objects. There are 5 types of database objects that the POCO Generator can handle:

  • Tables
  • Views
  • Stored Procedures
  • Table-valued Functions
  • User-Defined Table Types (TVP)

The first part of this article will describe how to use the POCO Generator. The second part will detail the implementation of retrieving the schema of the various data objects.

SQL Server Connection

The first window you’ll see is the SQL Server connection window. The Server, Authentication, Login & Password text boxes are pretty much self-explanatory. If the All checkbox is checked, the application will traverse all the databases in the specified SQL Server instance. The Database dropdown allows you to pick a specific database instead: click the Refresh button to pull all the database names from the SQL Server, uncheck the All checkbox and pick a database. The constructed connection string appears in the Connection String textbox as you fill in the various textboxes. If the checkbox next to it is checked, the textbox is enabled and you can write a connection string directly. Once you are ready, hit the Connect button.

POCO Generator

The SQL Server tree lists all the databases on that instance and each database lists its data objects – tables, views, procedures, functions & TVPs. The checkboxes on the tree are for picking specific objects for exporting to files. The upper right side of the window shows the current generated POCO, based on what is selected in the tree. The panel at the bottom lets you manipulate how the POCO looks and handles exporting to files. As you change these options, the POCO panel will be refreshed and you’ll see immediately how the POCO looks.

POCO

The POCO section manages the structure of the POCO.

  • Properties/Data Members – Normally, a POCO is constructed with properties but this option gives an option to use data members instead.
  • Virtual Properties – Adds a virtual modifier to the properties.
  • Partial Class – Adds a partial modifier to the class.
  • Struct Types Nullable – All the struct types will become nullable (int?, DateTime?) even if they are not nullable in the database.
  • Comments & Without null – A comment, for each property, of the original SQL Server type and whether it is nullable. Without null removes the nullable comments.
  • using – Adds using statements at the beginning of the POCO.
  • Namespace – Wraps the POCO with the specified namespace.
Class Name

By default, the name of the POCO class is the name of the data object, whether it is a C# valid name or not. The Class Name section manipulates that name.

  • Singular – Changes the name from plural to singular. Applicable only for tables, views & TVPs. I tried to do my best here, working with the singular rules of English grammar, but obviously it’s not fool-proof.
  • Include DB – Adds the database name.
  • DB Separator – Adds the specified separator after the database name.
  • Include Schema – Adds the schema name.
  • Ignore dbo Schema – If the schema name is “dbo”, doesn’t add the schema name.
  • Schema Separator – Adds the specified separator after the schema name.
  • Words Separator – Adds the specified separator between words in the class name. Words are defined as text between underscores or at camel-case boundaries.

    The class name EmployeeDepartmentHistory has 3 words in it, Employee, Department & History. The class name Product_Category has 2 words, Product & Category.

  • CamelCase, UPPER CASE, lower case – Changes the case of the class name.
  • Replace, With, Ignore Case – Performs a search and replace on the class name.
  • Fixed Name – Ignores all the previous options and sets the name of the class to the specified fixed name.
  • Prefix & Suffix – Adds prefix and suffix texts to the class name.
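The Words Separator rule described above can be sketched as a small function (illustrative only; this is not the POCO Generator's actual code, and it is written in TypeScript rather than the tool's C#):

```typescript
// Splits a class name into words at underscores and camel-case boundaries,
// e.g. for inserting a separator between words.
function splitWords(className: string): string[] {
  return className
    .split("_")
    .flatMap(part => part.match(/[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+/g) ?? []);
}

console.log(splitWords("EmployeeDepartmentHistory")); // ["Employee", "Department", "History"]
console.log(splitWords("Product_Category"));          // ["Product", "Category"]
```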
ORM Annotations

ORM Annotations section adds various ORM attributes to the POCO class and its properties. Applicable only for tables. I tried not to go overboard here and picked only two ORMs. I also tried to implement only the most necessary attributes for each ORM.

  • EF Code-First – Adds Entity Framework Code-First attributes.
    • Table attribute on the class declaration. [Table("Production.Product")]
    • Key attribute on primary key properties. [Key]
    • Column attribute for composite primary key properties with the Order value set to the order of the key in the composite primary key. [Column(Order = 1)]
    • MaxLength attribute on string properties. [MaxLength(50)]
    • Timestamp attribute on timestamp properties. [Timestamp]
  • Column – Adds Column attribute, with Name and TypeName values, for each property. [Column(Name = "ProductID", TypeName = "int")]
  • Required – Adds Required attribute for properties that are not nullable. [Required]
  • PetaPoco – Adds PetaPoco attributes.
    PetaPoco.TableName attribute on the class declaration.[PetaPoco.TableName("Production.Product")]
    PetaPoco.PrimaryKey attribute on the class declaration. [PetaPoco.PrimaryKey("ProductID")]
  • Explicit Columns – Adds the PetaPoco.ExplicitColumns attribute on the class declaration and adds a PetaPoco.Column attribute for each property. [PetaPoco.ExplicitColumns] [PetaPoco.Column]
Export to Files

Exports one or more POCOs to one or more files.

  • Folder – Specifies the folder to export to.
  • Append to File – Useful if you want to export multiple POCOs to a single file. If this option is not checked, the POCO Generator will export each POCO to a different file.
  • Export Button – If there are checked checkboxes on the SQL Server tree, the POCO Generator will export just them. Otherwise, it will export the current selected POCO.
Other Buttons
  • Copy Button – Copies the current selected POCO to the clipboard.
  • Type Mapping – A popup of SQL Server to .NET type mapping.

SQL Server Data Type Mappings

Filter Results

You can filter the results in each group (Tables, Views, …) by right-clicking on a group and choosing Filter from the context menu. In the filter popup, choose which names and schemas you want to include or exclude.

Stored Procedures with Many Result Sets

There is no way to determine if a stored procedure returns more than one result set. During the process of retrieving the schema of a stored procedure, only the first result set is returned. There is no way to get to the schema of any result set after the first one.

The “solution” to this problem is more of a hack than anything else. In the stored procedure, comment out the first SELECT query and alter the stored procedure. Then go to the UI, right-click on the stored procedure, and click Refresh in the context menu. Once the new POCO shows up, copy or export it for further use. Continue this process up to the last result set. When you’re done, remove the comments and restore the stored procedure.

To Do List

There are things that I didn’t implement. I felt they were too much for what is otherwise no more than an educational tool, at least for me. And of course, they take time and effort, which at this point I don’t have. The first thing that comes to mind is a command-line feature. With command-line capabilities, the POCO Generator could be integrated into the build process. I also wanted to make it database-blind, meaning it wouldn’t target just SQL Server. Another feature is a plugin mechanism that could extend the number of ORMs the POCO Generator supports. Finally, I wanted to support foreign keys between POCOs, e.g. tables that act as header and lines. In EF, a foreign key will look something like this:


public class Header
{
    [Key]
    public int HeaderId { get; set; }

    public ICollection<Line> Lines { get; set; }
}

public class Line
{
    [Key]
    public int LineId { get; set; }

    // scalar foreign key property referenced by the ForeignKey attribute
    public int HeaderId { get; set; }

    [ForeignKey("HeaderId")]
    public Header ParentHeader { get; set; }
}

Schemas

The process of retrieving the schema of SQL Server data objects is mainly done through the GetSchema methods of the DbConnection class. The class DbConnection, which SqlConnection inherits from, has several GetSchema methods which do exactly as their name suggests: they return schema information from the specified data source. You can pass to the GetSchema method the type of object that you’re looking for and a list of restrictions, which are usually used to filter on database name, schema name and the name of the object. A full list of object types and restrictions can be found on these MSDN pages: Schema Collections and Schema Restrictions.

Tables & Views

The schema type for both tables and views is “Tables”. For tables, put the string “BASE TABLE” in the last restriction, which is a table type restriction. For views, put the string “VIEW” in the table type restriction.

Tables:


using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    DataTable allTables = connection.GetSchema("Tables", 
        new string[] { database_name, null, null, "BASE TABLE" });
    DataTable specificTable = connection.GetSchema("Tables", 
        new string[] { database_name, schema_name, table_name, "BASE TABLE" });
}

and Views:


using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    DataTable allViews = connection.GetSchema("Tables", 
        new string[] { database_name, null, null, "VIEW" });
    DataTable specificView = connection.GetSchema("Tables", 
        new string[] { database_name, schema_name, view_name, "VIEW" });
}

User-Defined Table Types (TVP)

TVP schemas can’t be retrieved through the GetSchema methods, or at least not retrieved reliably. Getting TVP schemas requires a little querying on the SQL Server side. The first query gets all the TVPs in the database.


select 
    tvp_schema = ss.name, 
    tvp_name = stt.name, 
    stt.type_table_object_id 
from sys.table_types stt 
inner join sys.schemas ss on stt.schema_id = ss.schema_id

and for each TVP, we get its list of columns. The @tvp_id parameter is the type_table_object_id column from the previous query.


select 
    sc.*, 
    data_type = st.name 
from sys.columns sc 
inner join sys.types st on sc.system_type_id = st.system_type_id and sc.user_type_id = st.user_type_id
where sc.object_id = @tvp_id
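Putting the column query to work from C#, a minimal sketch might look like the following. It assumes the same connectionString used elsewhere in this article, and a tvpId value taken from the type_table_object_id column of the first query:

```csharp
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    // Get the columns of one TVP; @tvp_id is type_table_object_id
    // from the sys.table_types query above.
    using (SqlCommand command = new SqlCommand(
        @"select sc.*, data_type = st.name
          from sys.columns sc
          inner join sys.types st on sc.system_type_id = st.system_type_id
                                 and sc.user_type_id = st.user_type_id
          where sc.object_id = @tvp_id", connection))
    {
        command.Parameters.Add(new SqlParameter("@tvp_id", tvpId));
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // each row describes one TVP column:
                // name, data_type, max_length, is_nullable, ...
                string columnName = (string)reader["name"];
                string dataType = (string)reader["data_type"];
            }
        }
    }
}
```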

Stored Procedures & Table-valued Functions

The schema type for both stored procedures and functions is “Procedures”. For stored procedures, put the string “PROCEDURE” in the last restriction, which is a routine type restriction. For functions, put the string “FUNCTION” in the routine type restriction.

Stored Procedures:


using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    DataTable allProcedures = connection.GetSchema("Procedures", 
        new string[] { database_name, null, null, "PROCEDURE" });
    DataTable specificProcedure = connection.GetSchema("Procedures", 
        new string[] { database_name, schema_name, procedure_name, "PROCEDURE" });
}

and Functions:


using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    DataTable allFunctions = connection.GetSchema("Procedures", 
        new string[] { database_name, null, null, "FUNCTION" });
    DataTable specificFunction = connection.GetSchema("Procedures", 
        new string[] { database_name, schema_name, function_name, "FUNCTION" });
}

For each routine, we need to get its parameters. The schema type is “ProcedureParameters“.


using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    DataTable routineParameters = connection.GetSchema("ProcedureParameters", 
        new string[] { database_name, routine_schema, routine_name, null });
}

At this point, we can filter out anything that is not a table-valued function, meaning we need to remove scalar functions. A scalar function has a single return parameter, which is the result of the function, and that’s how we find them.
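For example, with the routineParameters table from the previous snippet, scalar functions can be detected by looking for a return-value row. This is only a sketch: it assumes the “ProcedureParameters” rows expose an IS_RESULT column (“YES” for a scalar function’s return value), as in INFORMATION_SCHEMA.PARAMETERS:

```csharp
// Hypothetical helper (requires System.Data and System.Linq):
// true if the routine is a scalar function, i.e. it has a
// return parameter marked IS_RESULT = "YES".
bool IsScalarFunction(DataTable routineParameters)
{
    return routineParameters.Rows
        .Cast<DataRow>()
        .Any(row => string.Equals(row["IS_RESULT"] as string, "YES",
                                  StringComparison.OrdinalIgnoreCase));
}
```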

Once we have the routine parameters, we build an empty SqlParameter for each one. An empty SqlParameter is a parameter with DBNull.Value set as its value. For a TVP parameter, we build a parameter with the SqlDbType.Structured type and an empty DataTable as its value.

This is a very abridged code snippet of how a SqlParameter is built.


SqlParameter sqlParameter = new SqlParameter();

// name
sqlParameter.ParameterName = parameter_name;

// empty value
sqlParameter.Value = DBNull.Value;

// type
switch (data_type)
{
    case "bigint": sqlParameter.SqlDbType = SqlDbType.BigInt; break;
    case "binary": sqlParameter.SqlDbType = SqlDbType.VarBinary; break;
    ....
    case "varchar": sqlParameter.SqlDbType = SqlDbType.VarChar; break;
    case "xml": sqlParameter.SqlDbType = SqlDbType.Xml; break;
}

// size for string types
// character_maximum_length comes from the parameter schema
if (data_type == "binary" || data_type == "char" || 
data_type == "nchar" || data_type == "nvarchar" || 
data_type == "varbinary" || data_type == "varchar")
{
    if (character_maximum_length == -1 || character_maximum_length > 0)
        sqlParameter.Size = character_maximum_length;
}

// direction
if (parameter_mode == "IN")
    sqlParameter.Direction = ParameterDirection.Input;
else if (parameter_mode == "INOUT")
    sqlParameter.Direction = ParameterDirection.InputOutput;
else if (parameter_mode == "OUT")
    sqlParameter.Direction = ParameterDirection.Output;

Now, we are ready to get the columns of the routine. When it comes to routines, we use the SqlDataReader.GetSchemaTable() method to get the routine schema, executing the reader with the CommandBehavior.SchemaOnly flag.

For stored procedures, we can use CommandType.StoredProcedure.


using (SqlConnection connection = new SqlConnection(connectionString))
{
    using (SqlCommand command = new SqlCommand())
    {
        command.Connection = connection;
        command.CommandText = string.Format("[{0}].[{1}]", routine_schema, routine_name);
        command.CommandType = CommandType.StoredProcedure;

        // for each routine parameter, build it and add it to command.Parameters

        using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.SchemaOnly))
        {
            DataTable schemaTable = reader.GetSchemaTable();
        }
    }
}

and for Table-valued functions, we need to construct a query that selects all the columns from the function.


using (SqlConnection connection = new SqlConnection(connectionString))
{
    using (SqlCommand command = new SqlCommand())
    {
        command.Connection = connection;
        command.CommandType = CommandType.Text;

        command.CommandText = string.Format("select * from [{0}].[{1}](", routine_schema, routine_name);
        
        // for each routine parameter, build it and add it 
        // to command.Parameters and add its name to command.CommandText
        
        command.CommandText += ")";

        using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.SchemaOnly))
        {
            DataTable schemaTable = reader.GetSchemaTable();
        }
    }
}

Happy Coding!

Setting up PayPal Instant Payment Notification(IPN) with C# – CodeProject


Article Link : http://www.codeproject.com/Tips/84538/Setting-up-PayPal-Instant-Payment-Notification-IPN

<%@ Page Language="C#"    %>
<%@ Import Namespace =  "System"%>
<%@ Import Namespace =  "System.IO"%>
<%@ Import Namespace =  "System.Text"  %>
<%@ Import Namespace =  "System.Net"  %>
<%@ Import Namespace =  "System.Web"  %>
<%@ Import Namespace =	"System.Net.Mail" %>

<script Language="JavaScript">
//Some JavaScript you may need goes here
</script>

<script Language="C#" Option="Explicit"  runat="server">

void Send_download_link (string from,  string to, string subject, string body)   
{		
   try
   {  // Construct a new e-mail message 
      // NOTE: replace with your actual SMTP server address
      SmtpClient client = new SmtpClient ("smtp.yourserver.com");
      client.Send (from, to, subject, body);
   } 
   catch (SmtpException ex)
   {
      debuggy.Text = "Send_download_link: " + ex.Message;

   } 
} // --- end of Send_download_link --

protected void Page_Load(object sender, EventArgs e)
{

	//Post back to either sandbox or live
	string strSandbox = "https://www.sandbox.paypal.com/cgi-bin/webscr";
	string strLive = "https://www.paypal.com/cgi-bin/webscr";
	HttpWebRequest req = (HttpWebRequest)WebRequest.Create(strSandbox);
	//Set values for the request back
	req.Method = "POST";
	req.ContentType = "application/x-www-form-urlencoded";
	byte[] param = Request.BinaryRead(HttpContext.Current.Request.ContentLength);
	string strRequest = Encoding.ASCII.GetString(param);
	string strResponse_copy = strRequest;  //Save a copy of the initial info sent by PayPal
	strRequest += "&cmd=_notify-validate";
	req.ContentLength = strRequest.Length;

	//for proxy
	//WebProxy proxy = new WebProxy(new Uri("http://url:port#"));
	//req.Proxy = proxy;
	//Send the request to PayPal and get the response
	StreamWriter streamOut = new StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII);
	streamOut.Write(strRequest);
	streamOut.Close();
	StreamReader streamIn = new StreamReader(req.GetResponse().GetResponseStream());
	string strResponse = streamIn.ReadToEnd();
	streamIn.Close();

	if (strResponse == "VERIFIED")
	{
		//check the payment_status is Completed
		//check that txn_id has not been previously processed
		//check that receiver_email is your Primary PayPal email
		//check that payment_amount/payment_currency are correct
		//process payment

	        // pull the values passed on the initial message from PayPal

		  NameValueCollection these_argies = HttpUtility.ParseQueryString(strResponse_copy);
		  string user_email = these_argies["payer_email"];
		  string pay_stat = these_argies["payment_status"];
		  //.
                  //.  more args as needed look at the list from paypal IPN doc
                  //.

                if(pay_stat.Equals("Completed") )
		{
			Send_download_link ("yours_truly@mycompany.com",  user_email, "Your order","Thanks for your order, this is the download link ... blah blah blah" );
		}		

		// more checks needed here specially your account number and related stuff
	}
	else if (strResponse == "INVALID")
	{
		//log for manual investigation
	}
	else
	{
		//log response/ipn data for manual investigation
	}
}  // --- end of page load --

</script>

<html>
<head runat="server">
<title>IPN PayPal</title>
</head>
<body>
<asp:label id="debuggy" runat="server"/>
<h2> my test page</h2>
Load this first to check the syntax of your page
</body>
</html>
	
Thanks to Author: becker666

Enterprise Library Webcast Series


A very nice Enterprise Library webcast series from the Microsoft team …

******************************

http://blogs.msdn.com/msdnwebcasts/archive/2005/03/02/382661.aspx

******************************

Enterprise Library 4.0 Data Access Application Block ( DAAB ) and Unity IoC Screencast

http://www.pnpguidance.net/Screencast/EnterpriseLibrary4DataAccessApplicationBlockDAABUnityIoCScreencast.aspx

Enterprise Library 4.0 Logging Application Block with Unity IoC Integration Screencast

http://www.pnpguidance.net/Screencast/EnterpriseLibrary4LoggingApplicationBlockUnityIoCScreencast.aspx

Enterprise Library 4.0 Validation Application Block and Unity IoC Screencast

http://www.pnpguidance.net/Screencast/EnterpriseLibrary4ValidationApplicationBlockUnityIoCScreencast.aspx

Ready for ASP.NET and jQuery?


jQuery is an open source JavaScript library that has a passionate following among Ajax developers. Microsoft is integrating the open source jQuery library into both the ASP.NET Web Forms and ASP.NET MVC frameworks and providing full product support. Learn how you can take advantage of jQuery to build richly interactive client-side Ajax applications when developing either ASP.NET Web Forms or ASP.NET MVC applications. Also see how jQuery works in combination with ASP.NET AJAX to provide the best framework for building Ajax applications.

http://channel9.msdn.com/pdc2008/PC31/

Community For MVC.Net (Some nice Webcasts and Presentations)


Community For MVC.Net

http://www.c4mvc.net/Home/Events

About

Our Vision is to share knowledge and best practices around using the framework, as well as encourage community contributions that serve the greater good.

Through interactive sessions, projects, and aggregated blog feeds we hope to provide:

  • A way for new developers to use the framework the right way.
  • Techniques and tools to allow experienced developers to develop with less friction.
  • A self supporting community.

Downloads

Templates for MVC projects can be downloaded from http://downloads.c4mvc.net

Great WCF Webcasts series from Das Blonde ….(1 to 15)


· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 01 of 15): Overview

Monday, July 02, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344312&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 02 of 15): Contracts

Monday, July 09, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344314&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 03 of 15): Contract Versioning

Wednesday, July 11, 2007 10:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344318&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 04 of 15): Exceptions and Faults

Friday, July 13, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344322&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 05 of 15): Bindings

Monday, July 23, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344330&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 06 of 15): Hosting

Wednesday, July 25, 2007 10:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344338&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 07 of 15): Messaging Patterns

Friday, August 10, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344342&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 08 of 15): Instancing Modes

Monday, August 13, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344344&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 09 of 15): Concurrency, Throughput, and Throttling

Wednesday, August 15, 2007 10:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344346&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 10 of 15): Security Fundamentals

Friday, August 24, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344348&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 11 of 15): Federated Security

Monday, August 27, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344351&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 12 of 15): Reliable Messaging

Wednesday, August 29, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344353&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 13 of 15): Transactions

Monday, September 03, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344355&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 14 of 15): Message Queuing

Wednesday, September 05, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344357&Culture=en-US

· MSDN Webcast: Windows Communication Foundation Top to Bottom (Part 15 of 15): Extensibility

Friday, September 07, 2007 9:00 AM Pacific Time (US & Canada)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032344359&Culture=en-US

Silverlight 2 June Webcast Series


Lindsay Rutter is going to be doing a series of webcasts on Silverlight 2 in June, starting on June 16th. The topics will include learning about Deep Zoom, learning the WPF UI framework, learning about adaptive streaming, and more. In total, there will be 6 webcasts. To register for any of these free webcasts, just click on its title in the list below:

“Silverlight” Webcasts from Tim Sneath


http://blogs.msdn.com/tims/archive/2007/04/30/silverlight-screencasts.aspx

To coincide with the launch of the Silverlight 1.0 Beta, my team has been working hard on a great series of intermediate-level screencasts on Silverlight that are just hitting the wires now. Each video is about five minutes in length, and covers a “how to” topic.

“Silverlight” webcasts from Mike Ormond


Silverlight Webcasts

Just noticed that we’re running a series of Silverlight webcasts for both designers and developers. They’ve started already but you can get the previous ones “on-demand”. The developer series starts next Monday.

Here’s the link to the main page

Designers

On-Demand Webcasts

Developers

Light up with “SILVERLIGHT” (ASP.NET)


  1. http://www.microsoft.com/events/series/silverlight.aspx?tab=Webcasts&seriesid=116&webcastid=4308
  2. http://www.microsoft.com/events/series/silverlight.aspx?tab=Webcasts&seriesid=116&webcastid=4312
  3. http://www.microsoft.com/events/series/silverlight.aspx?tab=Webcasts&seriesid=116&webcastid=4311
  4. http://www.microsoft.com/events/series/silverlight.aspx?tab=Webcasts&seriesid=116&webcastid=4313
  5. http://www.microsoft.com/events/series/silverlight.aspx?tab=Webcasts&seriesid=116&webcastid=4314
  6. http://www.microsoft.com/events/series/silverlight.aspx?tab=Webcasts&seriesid=116&webcastid=4310

Complete WCF (.NET 2008+) webcast series


NOTE: On page 3 of the registration process you can print the slides.
