Mastering TypeScript Book: Coming April 2015

With the amazing growth of the TypeScript language and compiler over the past two years or so, it is surprising that there are relatively few books on the subject.  The books that have appeared to date have done a decent job of explaining the language features, but hardly any have taken the extra step of working closely with a particular JavaScript framework.

Learning TypeScript is one thing, but learning to write Backbone.js applications with TypeScript is another.  Let alone Angular.js, or ExtJs, or Marionette.js, or require.js, or node.js for that matter. 

Each framework has its own peculiarities, syntax, object-creation lifetimes, and compatible libraries.  Some frameworks are able to use TypeScript’s natural language features, like interfaces and inheritance.  Others cannot.

So we are left with a number of questions:

  • How do we choose between these frameworks?
  • What are the differences between these frameworks when writing with TypeScript?
  • How do we unit test our code within these frameworks?
  • How do we implement object-oriented design patterns within these frameworks in TypeScript?
  • How do we build Single Page Applications for the web using TypeScript?
  • How do we write declaration files?
  • How do we use generics?
  • How do we use modules in node.js and require.js?
  • Can we use dependency injection in TypeScript?

Announcing “Mastering TypeScript”

Over the past few months I have been working closely with the Packt Publishing team on their next book in the TypeScript range, named “Mastering TypeScript”. I am pleased to announce that it is scheduled for publication next month, in April 2015.

You can read all about it here: https://www.packtpub.com/web-development/mastering-typescript

B03967_MockupCover_Normal

Have fun,

– blorkfish

SQL Server Database Source Code control with DBSourceTools 2

Scripting Existing databases

This is the second blog post on using DBSourceTools to help source control SQL Server databases. You can find the first part here, where we discussed the benefits of source controlling databases, and went through a step-by-step process of starting up a new project, deploying a target database, and then including patch scripts as part of this deployment process.

This blog post is geared towards working with existing databases, and is a guide for projects where a database is already in place, or where you would like to use a TEST instance of your database as your source snapshot.

For this tutorial, we will use the AdventureWorks sample database from Microsoft. You can download a version of AdventureWorks for any flavour of SQL Server from their codeplex site:

http://msftdbprodsamples.codeplex.com/

New project

Fire up DBSourceTools, and select File | New | Project.

UsingDBSourceTools_screenshot_21

Give the project a name, and point it at a directory on disk. In the screenshot below, we have called our project local_AdventureWorks_0.0, and used d:\source\AdventureWorks as the base directory.

clip_image002

Click on the Database button, and fill in the details on the next form. In this example, we have called the database local_AdventureWorks_0.0, which is the same naming convention that we used in our previous tutorial. This naming convention is serverName_DatabaseName_Version. If you were pointing to a TEST instance of a database, then this servername should match the database server name where the database resides. Using a naming convention makes it clear where the source database originates from:

clip_image004

You can use the Databases button to bring up a list of current databases on the server, and simply pick the correct database. Remember that the OK button will not be enabled until you click on the Test Connection button. This Test Connection will warn you if you do not have the correct privileges on the server to perform the scripting step. Hit OK, and then Yes to script the AdventureWorks database.

Once the process has finished, expand the local_AdventureWorks_0.0 icon in DBExplorer to see what database objects were scripted:

clip_image006

Now would be a good time to save the project: File | Save Project. Click on Yes to refresh the data from the database.

Adding a target database

The process of adding a target database is the same as we went through in the first article. Select Database | Add | New Deployment Target from the menu bar.

UsingDBSourceTools_screenshot_22

Fill in the required fields in the New Database Connection screen:

clip_image008

Again note that the Nick Name can be anything, but database NickNames must be UNIQUE across an entire DBSourceTools project. I have stuck to the same naming scheme as we used before for Target Databases: deploy_machineName_DatabaseName_Version.

Again note that we can simply type the Database Name in the Database text box at the bottom of the screen – and it is not necessary to click the Databases button or the Test Connection button when creating a Target Database.

Click on OK, and then expand the nodes in DBExplorer to see that the Deployment Target has been created successfully:

clip_image010

Now would be a good time to Save the project. You can safely say No when asked to refresh the data from the database, as we have already done this step previously.

Setting scripting options.

DBSourceTools allows you to have fine-grained control over which database objects are scripted, and included in your deployment. You may just want to script a specific subset of tables, or you may want to script all database tables, but just include data for your “configuration” tables. In this tutorial, we will script all database objects, and all data.

To configure these options, right-click on the local_AdventureWorks_0.0 source database, and select Properties:

clip_image012

This will bring up the Source database properties screen.

clip_image014

This screen includes a set of checkboxes and buttons at the bottom of the page that control which database objects to include in the deployment process.

Including data.

Click on the Tables button to see which tables will be included in the deployment process:

clip_image016

This screen shows all tables within the source database, and has options to script the table, Script the data, or both. Click on the Data dropdown on the menu-bar, and select Script all.

clip_image018

This will enable the Script Data checkboxes for all tables within the database. Hit the Save button to store these scripting options to disk.

Reload

This table options screen has a reload button on the top left. If your source database has changed since the last time you set scripting options – in this case, if the list of tables has changed – then hit the reload button to reload this list from the source database. This ensures that the scripting options stored on disk, and used by the deployment step, are in sync with the source database.

This process is similar for Procs, Views and Users, and each screen will allow fine-grained control over which database objects should be used in the deployment step.

Main options

The Source Database properties screen also has checkboxes marked for each of the database object types. Un-checking this “main” checkbox will exclude all objects for that category. Generally, you would want each of these checkboxes to be “on”.

Make sure that the Data checkbox is also “on”, because without this, all of the per-table settings will be ignored.

clip_image020

Once again, after making changes, Save the Project.

Writing Targets

The final step in configuring a source database is to write out our deployment targets. Navigate on the left-hand side to the Deployment Targets node, right-click, and select Write Targets. When asked whether to refresh data from the database, select No.

UsingDBSourceTools_screenshot_23

Each time you write targets, you are given the option of refreshing data from the database. If you would like to refresh the data at any time, then simply Write Targets, and select Yes for DBSourceTools to refresh the data from the source database.

The Write Targets step simply updates the Run_Create_Scripts.bat file based on the database objects found within the source database, and combines this with your scripting options. If you double-click on the Run_Create_Scripts.bat file, you will find that DBSourceTools simply runs sqlcmd to create tables, and then DBSourceDataLoader.exe to load data.

DBSourceDataLoader.exe is optimized to use SQLBulkCopy routines, and can load million-record tables in a matter of seconds. The speed of the data loader is constrained only by the speed of your local development machine.
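To give a feel for its shape, the sqlcmd portion of a generated Run_Create_Scripts.bat looks along these lines. This is an illustrative sketch only – the names below are adapted to this project, the real file is generated by the Write Targets step, and the exact DBSourceDataLoader.exe arguments are generated by the tool, so they are omitted here:

rem illustrative sketch - open your own Run_Create_Scripts.bat to see the generated file
sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_AdventureWorks_0.1\local_AdventureWorks_0.1_DropDB.sql
sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_AdventureWorks_0.1\local_AdventureWorks_0.1_CreateDB.sql
rem ... followed by a DBSourceDataLoader.exe call that bulk-loads the scripted table data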

Deploy the target.

Once we have finished the Write Targets step, we can right-click on the deploy_local_AdventureWorks_0.1 Target database, and select Properties. This will bring up the Target Database properties screen:

clip_image022

Simply hit Deploy Target from the menu-bar of this screen.

clip_image024

The Deploy Target screen is a confirmation screen showing your target database, its server, and which batch file DBSourceTools will be using. This is your final chance to cancel a deployment if you hit the button inadvertently. Note that DBSourceTools does a destructive deployment: it will completely remove the target database, recreate it, and re-deploy the database schema and data.

Hitting OK here will start the Deployment process:

clip_image026

In the screenshot above, one of the stored procedures is generating an error, and so the output text is coloured in red.

Common deployment errors

One of the most common errors when scripting source databases is caused by having different directory names for the .mdf and .ldf files.  If your deployment results screen shows a lot of red errors, then this is most probably the cause:

UsingDBSourceTools_screenshot_24

On most servers, database files and log files are written to a specific directory, and generally not on the C:\ drive.  DBSourceTools uses the SQL Server Management API to generate the CreateDB.sql script when a new source database is created – to ensure that all of the database settings are correct.

To fix this problem, simply navigate to the CreateDB.sql script within the DBExplorer, and double click on the icon.  This will open up a new text editor window showing the CreateDB.sql script. 

image

Notice on the second and fourth lines that the script uses a FILENAME parameter which has the physical path to both the .mdf and .ldf files.  This full path MUST exist on your local machine.  To fix these errors, either create this directory on your local machine, or modify the path to point to the same directory as all of your other .mdf and .ldf files.
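To make this concrete, the generated CreateDB.sql will contain something along these lines – the paths below are examples, and yours will show whatever directories the original server used:

CREATE DATABASE [AdventureWorks] ON PRIMARY
( NAME = N'AdventureWorks_Data',
  FILENAME = N'D:\SQLData\AdventureWorks_Data.mdf' )  -- this directory must exist locally
LOG ON
( NAME = N'AdventureWorks_Log',
  FILENAME = N'D:\SQLData\AdventureWorks_Log.ldf' )   -- and so must this one
GO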

Viewing deployment results

Each deployment will create a Deploy_Results_<datetime>.txt file in the deployment target directory. You can always open this text file to view the deployment logs.

Adding the target as a source

As we saw in the first blog on DBSourceTools, you may want to include the newly deployed database as a source database in DBExplorer. This allows you to see both databases side by side, and allows for schema comparisons, data scripting options and much more.

To add the newly created database as a source database, simply click on Database | Add | New Source Database.

clip_image028

Give this new source database a NickName – which is generally machineName_DatabaseName_Version – so in this case local_AdventureWorks_0.1.

clip_image030

Remember that you can click on Databases to bring up a list of the available databases on the machine, and that you must click Test Connection before the OK button will be enabled. Once you have hit OK, select Yes to script the new database in DBSourceTools. You should now have both databases showing up in DBExplorer:

clip_image032

Checking the number of records

DBSourceTools has a number of built-in script utilities. To generate a SQL script that will count the number of records in your database, right-click on the database, select New Query and then the Count Rows option.

clip_image034

This will generate a script in the Queries directory that simply counts the number of rows in the database. When the query is shown, simply hit F5 to run it.

clip_image036
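If you would rather hand-write an equivalent query, a per-table row count can also be read from the partition metadata. The following is a generic sketch of that approach, not the script DBSourceTools generates:

-- count rows in every user table using partition metadata
SELECT t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.partitions p
    ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY t.name
ORDER BY t.name;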

Summary

In this post, we have shown how to set up a new DBSourceTools project, and how to use an existing database as a source database.  We then viewed the source database scripting options, and set options to script data for all tables in the database.  Finally, we created a deployment target for this source database, updated the Run_Create_Scripts.bat file by using the Write Targets process, and deployed the database to our local machine.

SQL Server Database Source Code control with DBSourceTools

No source control of databases

All too often I find that development teams will meticulously source control and code-review changes to their application source code, but this process is never applied to databases. In just about every TEST, UAT and even PROD database that I work with, changes to the database schema over time will leave broken stored procedures, broken views and even orphaned child records. By broken, I mean that the stored procedure or view is relying on fields that have been re-named or removed from the underlying tables. These procedures will never run successfully until they have been identified and fixed.

Problems with a common DEV database

Changing database schemas present an even greater problem when development teams all use a common DEV database. If one developer applies a patch to a common DEV database as part of his check-in process, it can easily break everyone else’s environment and unit tests until each developer in turn updates their code to match the latest check-in. Even worse, if this check-in has introduced a bug, and causes further unit tests to fail on a build server, then the entire team is forced to scramble and try to fix the broken build, or even roll back the offending changeset, and restore the database to allow other developers to continue working. Using a common DEV database just does not make sense.

This common database approach can also lead to serious headaches when a source tree has been branched for either a feature or as part of a production deployment. If schema changes occur on the trunk branch, this can seriously impact anyone still using the older schema on a production branch. Again, using a common DEV database just does not make sense.

Use an isolated database instance per developer

The sensible way of dealing with schema changes is two-fold.

Firstly, each developer must have their own instance of the database as a “play pen” – so that they can make schema changes or massage data without interfering with another developer’s working environment.

Secondly, each developer must have a quick and easy way to restore a database to a point in time – before their changes were applied. In this way, scripts that developers are writing can be re-run against a “clean” version of the database, effectively testing these scripts before checking in.

Moreover, when a source code tree is branched, the database schema and data at that time should be branched along with the source code, so that any developer working on the branch will be able to recreate a “clean” database with the schema that relates to that branch.

Include your database in Source Control

The solution to these database dilemmas is surprisingly simple. Script out your entire database and store these scripts on disk, and check them in to source control. These scripts are then used to re-create each developer’s working database. As the scripts are in source control, you now have a fully source controlled and versioned database. This gives developers the same freedoms as normal application code, with the ability to branch, merge, check in, and check out freely.

The only problem is that this scripting process should be as easy as compiling source code in an IDE. That is, check out from source, open up your IDE, compile your code and check for errors. This is where DBSourceTools comes in.

DBSourceTools

DBSourceTools is designed to help developers source-control their databases. It will script an entire database to disk. Once these scripts are on disk, they can be used to re-create the database in its entirety. Adding these scripts to source control allows developers to re-create a database as it was at a point in time in source control.

This mechanism is very similar to Microsoft Database Projects – Microsoft themselves generate scripts within a database project, and deploy these scripts (essentially compiling the database) under the covers.

The following diagram shows the basic usage of DBSourceTools:

clip_image002

1. Connect to a source database. If using an established database, then this is usually the TEST or UAT instance.

2. DBSourceTools will script all objects within this database, and (optionally) its data to local disk.

3. This directory structure is then committed into Source Control.

4. DBSourceTools then loads these scripts from disk, and includes any patches in the patches directory to be run after the database is created.

5. DBSourceTools then deploys the database to a new Target Database (usually on the local SQL instance), loads all data, and applies any patches.

a. Note that this is a two-step process: DBSourceTools will DELETE the target database, and then completely RECREATE it from scratch.

6. These patches can then be added to source control.

Scenario 2:

Once added to Source Control, a second developer can re-create this database without a connection to the original source database – as all required objects and data are part of the files on disk. The following diagram shows this process:

clip_image004

1. Update source tree on local disk from Source Control

2. This update will fetch all required scripts, data and patches from Source Control.

3. Run DBSourceTools to load the project.

4. Deploy the target database (usually to the local SQL instance).

Benefits of using DBSourceTools.

All developers use their own local instance of the database.

This means that two developers can make their own schema changes to an isolated instance of the database independently of each other, and not step on each other’s toes. Data Access Layer objects can be modified, and will only take effect once both the code and the database patches are committed to Source Control.

Databases are an instant in time.

Because all database objects are scripted to disk, and DBSourceTools DELETES and then RECREATES its Target database, a developer can quickly and effectively RESET the database to an instant in time – before any changes were made.

This instant in time is protected by Source Control, so going back to a specific changeset means that the database can be re-created at the instant in time that the changeset was created.

Patching

Using the very simple patching mechanism, developers can easily test whether their patches will be successful, in an iterative manner:

  • Delete and re-create the database from source – to an instant in time before any changes were made.
  • Write SQL scripts, and test them against the database.
  • Bundle these SQL scripts into a patch, and include it in the Patches directory.
  • Delete and re-create the database in one step, including the new patch.
  • Ensure that the patch worked correctly.

    Merging changes from other developers.

    When patches are added to source control from other developers, it is a simple matter of updating the patches directory with their changes, and re-deploying the database. DBSourceTools will run all of the patches in one go – thereby checking to see whether your patch works correctly with new patches committed by other developers.

    If your patch does not work correctly because of other patches, you can easily modify it, re-run it again and again before checking into source control.

    TEST, UAT and PROD patching.

    If you are unsure whether a patch will work correctly on TEST, UAT or PROD data, then it is a simple matter of getting a backup of any of these databases, scripting them to disk, and running through the patches as normal.

    Database compilation.

    By re-building a database from source files, errors in a database schema can be quickly found. As databases evolve over time, quite often stored procedures or views become unusable because they are targeting fields or tables that have been renamed or removed. By re-building a database from scripts, these problems can be quickly identified and resolved.

    Data included

    DBSourceTools has powerful options to select which data should be scripted to disk. Select all tables, or just “configuration” tables when scripting a database to disk. DBSourceTools uses the SQLBulkCopy routines, and can load millions of records in a matter of seconds.

    By including data with your database, applications can be load tested with data volumes, or debugged against PROD or UAT data – all within the safety of a local SQL instance.
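    For those curious what the bulk-copy technique looks like in code, the following is a minimal .NET sketch of the SqlBulkCopy pattern. It is not DBSourceTools’ actual loader code, and the connection string and table name are placeholders:

    using System.Data;
    using System.Data.SqlClient;

    public static class BulkLoader
    {
        // Minimal sketch of the SqlBulkCopy pattern (illustrative only).
        public static void LoadTable(string connectionString, DataTable rows)
        {
            using (var bulkCopy = new SqlBulkCopy(connectionString))
            {
                bulkCopy.DestinationTableName = "[dbo].[MyFirstTable]"; // placeholder table name
                bulkCopy.BatchSize = 10000;
                bulkCopy.WriteToServer(rows); // rows holds the data scripted to disk
            }
        }
    }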

    Step-by-step Tutorial

    Let’s go through the process of using DBSourceTools in a step-by-step manner. We will start with a blank database, and then use the patching mechanism to create some tables and insert some data.

    Create a blank database

    To start off with, create a blank database on your local SQL instance – using SQL Management Studio, and call it TutorialDb_0.0.

    clip_image006

    Loading a Source Database

    Now fire up DBSourceTools, and select New Project. This project will need a name, which we have specified as Tutorial_Project_0.0, and it will need a base directory on disk – which we have chosen to be d:\source\TutorialDb:

    clip_image008

    Now click on the Database button. This will give you the following database screen:

    clip_image010

    A database Nick Name can be anything, but nick names must be UNIQUE across a project. I prefer to use the source server name as the prefix, then the database name, and then a version number. If you were scripting this database from a TEST environment, then you would name it TEST_TutorialDb_0.0, or if from PROD, then PROD_TutorialDb_0.0.

    You can connect to any server, using either Windows Auth or SQL Auth. Once you have selected an Authentication scheme, click on the Databases button to bring up a list of databases on that server, and select which one you require.

    Then click on the Test Connection button. The Test Connection button simply checks to see whether you have the correct permissions on the source database to allow for scripting. If not, you will need to modify the permissions on the source database to allow for db_owner privileges.
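    If you do need to grant that privilege, the T-SQL is along these lines – the user name below is just an example, so substitute your own login’s database user:

    -- add the scripting user to the db_owner role (example user name)
    USE [TutorialDb_0.0];
    GO
    EXEC sp_addrolemember 'db_owner', 'MyDomain\dev1';
    GO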

    Once the connection has succeeded, click on OK, then OK again. Your source database connection is now set up. DBSourceTools will then prompt if you would like to load the database now. Click Yes.

    Once the load process is finished, you will see your source database on the DBExplorer panel on the left hand side of the screen.

    image

    It is a good idea to save this project at this stage, so click on File | Save Project, and then click on No when DBSourceTools asks you if you would like to refresh the data.

    Creating a Target Database

    Our database does not have anything in it as yet, but let’s create a target database so that we can start using the patching engine. Click on Database | Add | New Deployment Target

    clip_image014

    This brings up a similar database connection dialog as follows:

    clip_image016

    The only required fields on this screen are Nick Name, and Database. Note the naming convention for the Nick Name. I always prefix a deployment Target database with the word deploy, followed by the servername, followed by the database name, and an incremented version. Again, these nick-names must be UNIQUE across a DBSourceTools project.

    It is not necessary to click on the Databases or Test Connection buttons when creating a Target Database.

    Clicking OK here will create a deployment target database that is under the tree structure of your original source database. Expand the tree until you can see this new database.

    image

    Again, remember to Save the project now.

    Deploying the Target Database

    To deploy our Source database to our Target database, right-click on the Target database, and select Properties:

    clip_image020

    This will bring up the Target database properties in the panel on the right hand side:

    clip_image022

    Click on the Deploy Target button. This button will open a new window, and execute the Run_Create_Scripts.bat file which is on disk, and is a child of the deploy_local_TutorialDb_0.1 directory:

    clip_image024

    This new database (local_TutorialDb_0.1) should now be created on your local SQL server.

    Creating Patches

    Now that we have deployed our source database to the target database, we can start creating patches. These patches will be attached to our Source database – under the patches directory of the deployment target. When writing and creating patches, I always find it handy to have both Source and Target database available within the same project.

    Add your target database as a source.

    Click on Database | Add | New Source Database to create a new Source database within the same project:

    clip_image026

    This source database will actually be the Target (local_TutorialDb_0.1) database that we deployed earlier. Use the New Database Connection dialogue to specify this as the source database:

    clip_image028

    Note that I have used the same naming convention for the source database Nick Name as earlier: machine name, database name, version number. We can also click on the Databases button to select the local_TutorialDb_0.1 database from the available list, and then we need to click on Test Connection before the OK button will become available.

    Once you have hit OK, Click on Yes to load the database into DBSourceTools.

    clip_image030

    This will load the new database as a source database, and include it in the DBExplorer:

    clip_image032

    Remember that our source database is the one at the top, and has a version number of 0.0. The database that we are deploying to is on the bottom, and has a version number of 0.1.

    Creating and Scripting Tables.

    This new source database (local_TutorialDb_0.1) is now a “Playground” instance that we can use to create tables, insert data, or generally design our new database in. Once we have made changes to this database, we will need to ensure that we create patches from these new tables, views, etc, and include them in the Patches Directory of our deployment target.

    Deploying from a source database (0.0) to a target database (0.1) will completely delete the target database before re-creating it and running patches. So remember that the “Playground” database can be wiped clean at any stage, and you can start from scratch if you make any unrecoverable mistakes.

    You can create a new table using SQL Management Studio, or simply by running SQL scripts, or in whatever way you like.

    Once created, though, make sure that you use DBSourceTools to script your database tables. When scripting tables from SQL Management Studio, the generated scripts DO NOT include any indexes that you may have created on the table, or in fact any related objects. You will need to generate scripts for your indexes in a separate step, and then combine these scripts to re-create your table successfully. This is obviously error-prone and time-consuming.

    DBSourceTools will script tables, indexes and any related objects in one step.

    As an example of this, let’s create a new table using DBSourceTools.

    Create a Table

    Right-click on the 0.1 version of your database, then select New Query, New Table to generate a sample script for creating a new table:

    clip_image034

    The resulting script has the basics of the SQL that you will require in order to create a new table. Note that most of the script is just a list of commented SQL datatypes, inserted as a handy reference should you want to refresh your memory on how to use different datatypes.

    At the bottom of the script is a line to add a constraint for a primary key – don’t forget to fill in the blanks here – all tables should have a primary key!

    Modify the script to look something like this:

    CREATE TABLE [dbo].[MyFirstTable](
        [Id] [bigint] identity(1,1) NOT NULL,
        [Name] [nvarchar](25) NOT NULL,
        [Description] [nvarchar](100) NULL,
        CONSTRAINT [PK_MyFirstTable] PRIMARY KEY ( [Id] ASC )
    ) ON [PRIMARY]
    GO

    Executing queries

    To execute the current SQL query against the current database, simply hit F5.

    clip_image036

    Saved Queries

    Notice the DBExplorer on the left-hand side. DBSourceTools has created a new Queries directory under local_TutorialDb_0.1, and saved your create table script as Query.sql. This Queries directory will be used for any scripts that DBSourceTools creates – so it is a quick and handy way of going back to older scripts.

    Any query under the Queries directory will be run against its parent database by default, so it is a handy “ScratchPad” area to use when working with and running scripts.

    Reloading from Database

    All well and good so far, but our new table has not appeared in the DBExplorer tree view as yet. This is because DBSourceTools by default loads databases from disk, not from the database itself. What we need to do now is to refresh what is on disk with what is actually in the database. To do this, right-click on the source database, and select Load from Database:

    clip_image038

    This will refresh the database structure from the updated database.

    Once this is complete, you will see the MyFirstTable appear under the Tables node of the database.

    Clicking on the expand tree icons, you will notice that DBSourceTools adds some handy features when working with database objects.

    clip_image040

    Firstly, double-clicking on the table name will bring up a source code view of the table definition. Secondly, the table has a Data icon. Double clicking on this data icon will open up a new window, and immediately show all data in the table. In SQL Management Studio, this is a two-step process – you need to right-click on the table and then click select top(1000) to have a quick view of your data.

    Thirdly, there is a Fields icon, and expanding this will show a list of field names and their data types.

    To view the SQL script definition of the table, simply double click on the table name:

    clip_image042

    Inserting Data

    You can insert data into a table in whatever manner you choose, but DBSourceTools can also help generate a sample script. Right-click on the table, and select Script Insert. The generated script will provide enough information to be able to simply fill in the blanks:

    clip_image044

    We can modify this script pretty easily to insert two records into MyFirstTable:

    insert into MyFirstTable(
        /*[Id] bigint primary key (identity) */
        [Name] /* NOT NULL */
        ,[Description]
    )
    values (
        /*[Id] bigint primary key (identity) */
        'First name' /*Name NOT NULL nvarchar */
        ,'First description' /*Description nvarchar */
    )

    insert into MyFirstTable(
        /*[Id] bigint primary key (identity) */
        [Name] /* NOT NULL */
        ,[Description]
    )
    values (
        /*[Id] bigint primary key (identity) */
        'Second name' /*Name NOT NULL nvarchar */
        ,'Second description' /*Description nvarchar */
    )

    Now hit F5 to run the script.

    Viewing and Scripting Data

    Double click on the data icon in the DBExplorer under the MyFirstTable icon:

    clip_image046

    This will bring up the data view, showing all records currently in the table. From here we can easily create an insert script to include this data in a patch. Simply click on the Script Data button in the Data Window. The generated script will automatically set IDENTITY_INSERT ON and then OFF to preserve our identity seed on the Id column, and also set NOCOUNT ON for running the script:

    SET NOCOUNT ON
    SET IDENTITY_INSERT [MyFirstTable] ON
    insert into [MyFirstTable] ( [Id],[Name],[Description] ) values ( 1,'First name','First description' )
    insert into [MyFirstTable] ( [Id],[Name],[Description] ) values ( 2,'Second name','Second description' )
    SET IDENTITY_INSERT [MyFirstTable] OFF
    SET NOCOUNT OFF

    This script and the table definition are now ready for patching.

    Creating patches

    To include our new database table definition, and its data, in a deployment step, we will now create two patches under the patches directory of our original source database. Use the DBExplorer window to expand the tree as follows: local_TutorialDb_0.0 > Deployment Targets > deploy_local_TutorialDb_0.1 > Patches.

    Right-click on the patches icon, and select new patch:

    clip_image048

    Fill in the patch name. Note that patches are loaded alphabetically, so make sure that you number your patches. We will create a Patch_001_table_MyFirstTable as follows:

    clip_image050

    Create a second patch using the same process, and call this patch Patch_002_data_MyFirstTable.

    Double clicking on a patch will open up the script in an editor window. So edit Patch_001_table_MyFirstTable, and copy the definition of MyFirstTable into it. Remember that double-clicking on any table name will bring up the database script used for the table – so find the table MyFirstTable, double-click on it, and copy the create script. Paste it into Patch_001, and save the file.

    Edit Patch_002_data_MyFirstTable by simply double-clicking on it. Copy and paste the insert script created in the previous step into this patch, and save the file.

    Including the patches in the deployment script

    DBSourceTools uses a simple batch file to run the deployment scripts, load tables and data, and run the patches. The file that it uses is called Run_Create_Scripts.bat, and lives as the first file underneath the deployment target database. Double-click on this file to see what it contains:

    set DB_BASE_DIR=D:\source\TutorialDb\local_TutorialDb_0.0\
    set BASE_BIN_DIR=C:\Program Files (x86)\DBSourceTools\
    set PROJECT_BASE_DIR=D:\source\TutorialDb\
    set PATCH_DIR=D:\source\TutorialDb\local_TutorialDb_0.0\DeploymentTargets\deploy_local_TutorialDb_0.1\
    rem
    sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_TutorialDb_0.1\local_TutorialDb_0.1_DropDB.sql
    sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_TutorialDb_0.1\local_TutorialDb_0.1_CreateDB.sql

    As we can see, this file is just setting some global variables, and then running a DropDB and CreateDB script. We now need to update this script to include our new patches.

    This process is called Writing Targets. Using the DBExplorer, right-click on the Deployment Targets node of the source database, and click Write Targets.

    clip_image052

    This process also includes the option of refreshing data from the source database. At this time our source database is blank, so we can safely say No here.

    clip_image054

    Once this process has finished, open up the Run_Create_Scripts.bat file again. If you already have this file open, you may be viewing the in-memory version of this file, so it is always safer to close the file first, and then re-open it by double-clicking on the file.

    Note how DBSourceTools has added our two patches at the bottom of the script:

      sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_TutorialDb_0.1\local_TutorialDb_0.1_DropDB.sql
      sqlcmd -S (local) -E -i %DB_BASE_DIR%DeploymentTargets\deploy_local_TutorialDb_0.1\local_TutorialDb_0.1_CreateDB.sql
      sqlcmd -f 850 -S (local) -d local_TutorialDb_0.1 -E -i "%PATCH_DIR%Patches\Patch_001_table_MyFirstTable.sql"
      sqlcmd -f 850 -S (local) -d local_TutorialDb_0.1 -E -i "%PATCH_DIR%Patches\Patch_002_data_MyFirstTable.sql"

    This Run_Create_Scripts.bat file is at the heart of the deployment process. Any time that we deploy a target database, this script will be run. Make sure that whenever you add patches or change options on the source database, you remember to do the Write Targets step to update this file.

    Killing databases.

    To re-deploy our database including our new patches, simply right-click on the target database and select properties. This will bring up the database properties screen, where you can hit the Deploy button to start the deployment process.

    If your script hangs or gives errors during the database drop step, it may be that there are still connections open to the target database, which will interfere with the drop command. To close all existing connections and drop the database in one step, simply click on the Kill database button.
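    For reference, the manual T-SQL equivalent of this kill-and-drop step is the standard single-user trick shown below. This is the usual approach, though not necessarily the exact code DBSourceTools runs internally:

    -- force all other connections off the database, then drop it
    USE [master];
    GO
    ALTER DATABASE [local_TutorialDb_0.1] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    GO
    DROP DATABASE [local_TutorialDb_0.1];
    GO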

    Let’s use this kill step, and then re-deploy the database:

    Right-click on the deployment database named deploy_local_TutorialDb_0.1, and select Properties.

    clip_image056

    This will bring up the Properties window, with the Deploy Target and Kill Database buttons on the menu bar:

    clip_image058

    Go ahead and click on the Kill Database button to close all existing connections to the local_TutorialDB_0.1 database, and drop it in one step.

    clip_image060

    clip_image062

    We can now hit the Deploy Target button to run the Run_Create_Scripts.bat file and re-create the database:

    clip_image064

    Check to see that there is no red text in the output window – if there is, then something has gone wrong with the deployment.

    Checking output results

    DBSourceTools keeps a copy of each deployment in a text file in the same directory as Run_Create_Scripts.bat. If we fire up an explorer and navigate to this directory, we will find a DeployResults_ file for each deployment. The contents of this file are an exact copy of the output window above. If you encounter errors, then have a look at these files as a record of each of your previous deployment runs.

    Verify the Target Database.

    Once we have completed the deployment step, we can re-load our new database from the database, just to ensure that we still have all of our tables and data loaded correctly. To do this, right-click on the local_TutorialDB_0.1 database in the DBExplorer view, and select Load from Database.

    clip_image066

    Expand the nodes of this database to ensure that it contains the MyFirstTable, and then double click on the Data icon to check that this table has data.

    clip_image068

    Add files to Source Control

    The last step in this process is to add files to your Source Control engine.

    Sharing your database

    DBSourceTools uses full path names for project files and script names. Unfortunately, this means that all developers must have the same path names on their machines in order to share databases. This can be easily accomplished by substituting the same drive letter on each developer machine. Drive substitution is different to mapping a network path, and is accomplished using the subst command in a DOS prompt.

    Let’s assume that developer 1 stores his source code in the following location:

    C:\users\dev1\source\

    And developer 2 stores his source code at:

    C:\source

    If the DBSourceTools base directory has been set at d:\source\TutorialDB, then we will need both developers to have the same d:\source\TutorialDB directory structure.

    This can be easily accomplished by using the subst command to map a virtual d: drive to c:\users\dev1. Run the following command in a DOS prompt:
    subst d: c:\users\dev1

    If substitutions are necessary for your developer machine, then you can easily create a quick batch file to do this substitution, and run it on startup.

    The same substitution on developer 2’s machine would simply be

    subst d: c:\

    This ensures that the directory used by DBSourceTools is the same across both machines: d:\source\TutorialDb.
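    If you do take the startup batch file route mentioned above, it need only contain the one subst line – the target path is per-machine, so adjust it accordingly:

    @echo off
    rem re-create the virtual d: drive on startup (path is per-machine)
    subst d: c:\users\dev1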

    Summary

    In this tutorial, we have used DBSourceTools to start with a blank database, deploy it to a target database, modify the target database, and then reverse-engineer our changes back into patch scripts.

    At the end of this process, we can simply hand over the patches to a DBA who will be able to re-create our shiny new database on TEST, UAT and PROD boxes. DBAs generally create databases themselves, as the disk space requirements and subtle tweaks needed in each environment are slightly different, and they would know best. So DBSourceTools can be used to help write these scripts.

    TypeScript: Using Backbone.Marionette and REST WebAPI (Part 2)

    This is Part 2 of an article that aims to walk the reader through setting up a Backbone.Marionette SPA application with Visual Studio 2013, and in particular, writing Marionette apps using TypeScript.

    As always, the full source code for this article can be found on github: blorkfish/typescript-marionette

    Part 1 can be found here, and covered the following:

    • Setting up a Visual Studio 2013 project.
    • Install required nuGet packages.
    • Creating the ASP.NET HomeController and View.
    • Including required javascript libraries in our Index.cshtml.
    • Creating a Marionette.Application.
    • Adding a Marionette Region
    • Referencing the Region in the Application
    • Creating a Marionette.View
    • Using bootstrap to create a clickable NavBar Button
    • Using Backbone Models to drive NavBar buttons.
    • Creating a Backbone.Collection to hold multiple Models
    • Creating a Marionette.CompositeView to render collections
    • Rendering Model Properties in templates
    • Using Marionette Events.

    In this part of the article, we will cover the following:
    • Creating an ASP.NET WebAPI Data Controller.
    • Unit testing the WebAPI Data Controller in C#
    • Modifying the WebAPI Data Controller to return a collection of nested C# POCO objects.
    • Defining TypeScript Backbone.Model classes to match our nested class structure.
    • Writing Jasmine unit tests for our Backbone Collection.
    • Creating a Marionette.CompositeView to render data in a bootstrap table.
    • Using a Marionette.CompositeView as an ItemView
    • Rendering nested Backbone.Collections
    • Using CompositeView properties to generate html.
    • Using bootstrap styles in a Marionette.CompositeView

    When we are complete with this tutorial, we will have generated a nested Json structure as follows: UserList –> User –> RoundScores –> RoundScore. We will then render it as follows:

    image

        So let’s get started.

        Creating an ASP.NET WebAPI Data Controller.

        Creating an ASP.NET WebAPI Data Controller is just as simple as creating a normal MVC Controller.  All we need to do is to derive our class from ApiController instead of Controller, and then specify what the url will be when calling this controller. 

        The latter part is accomplished by adding two attributes to a method call on our ApiController. These are the [Route] attribute – to specify the url, and a System.Web.Http attribute to specify whether this is a REST GET, POST or DELETE.
        Go ahead and create a file under the /Controllers directory named HomeDataController.
        Derive this class from ApiController (in the System.Web.Http namespace), and add a Route attribute, as well as an HttpGet attribute, as in the code below:
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;
        
        namespace typescript_marionette.Controllers
        {
            public class HomeDataController : ApiController
            {
                [Route("api/dataservices")]
                [HttpGet]
                public HttpResponseMessage GetDataTable()
                {
                    return Request.CreateResponse<IEnumerable<string>>(HttpStatusCode.OK, GetData());
                }
        
                public List<string> GetData()
                {
                    return new List<string> {"test1", "test2"};
                }
        
            }
        }
        There are a few things to note about the code above.
        Firstly, the [Route("api/dataservices")] attribute.  This defines the route to our DataController function.  So firing up a web-browser and pointing it to /api/dataservices will hit this DataController.
        Secondly, the [HttpGet] attribute.  This attribute defines the method signature as allowing REST GETs.
        Thirdly, the return type of HttpResponseMessage – and the return syntax: return Request.CreateResponse<type>.  These two signatures will return JSON when JSON is requested, or XML when XML is requested.
        Fourthly, the HttpStatusCode.OK will return a success callback to any JavaScript calling code.  Interestingly enough, throwing an Exception anywhere in the call stack will return an error callback to any JavaScript calling code.  This is all built-in when using classes deriving from ApiController.  Later on, we will create a Jasmine unit test to test our error callback – to make sure that we are handling errors correctly.
        Next, note that I have created a method public List<string> GetData(), where I could have simply created this List<string> within the GetDataTable() method directly.  Splitting these methods will help us with unit-testing later on in the piece.  One C# xUnit test will target the GetData() function, and then we will use Jasmine to unit-test the returned Json response in GetDataTable().
        Lastly, the IEnumerable<type> syntax.  In the code sample above, we are simply returning a string type.  But further down the line, we will define [Serializable] POCOs to return nested Json, such that returning IEnumerable<MyType> will return full MyType classes – as well as any child classes or collections defined – automatically transformed into Json or Xml.  Cool, huh?
        Unfortunately, firing up a web-browser and typing in the url /api/dataservices at this stage will not work.  Using IE, you will get the very helpful error message “The webpage cannot be found”, with a 404 status:

      image

      Using Chrome, the error message is slightly more helpful: No HTTP resource was found that matches the request URI ‘http://localhost:65147/api/dataservices’.

      This is down to one missing line of code.  Navigate to the App_Start directory, and double-click on the WebApiConfig.cs file.  Modify the code to call config.MapHttpAttributeRoutes() as shown below.  This line of code is called on app startup, and simply traverses the code to find our [Http] and [Route] attributes – and then adds them to the Route Table.

      namespace typescript_marionette
      {
          public static class WebApiConfig
          {
              public static void Register(HttpConfiguration config)
              {
                  config.MapHttpAttributeRoutes();
      
                  config.Routes.MapHttpRoute(
                      name: "DefaultApi",
                      routeTemplate: "api/{controller}/{id}",
                      defaults: new { id = RouteParameter.Optional }
                  );
              }
          }
      }
      

      Firing up Chrome at this stage, and navigating to /api/dataservices will now generate the expected result: a string array with two entries: test1 and test2:

      image

      I have always found that Chrome or Firefox is far easier to work with when manually testing DataControllers.  IE for some reason does not have a default rendering engine for pure data.  It always tries to download the json response – and then you need to select a program to view this data – which is terribly annoying. 

      image

      image

      So stick to a browser other than IE when manually testing DataControllers.

      Unit-testing the WebAPI Data Controller in C#

      Right, so what would a good Data Controller be without unit tests?

      As far as I understand, Hamlet D’Arcy is credited with the saying “[Without unit tests] You’re not refactoring, you’re just changing shit.”

      So best we create some unit tests, before we start changing shit…

      Personally, I have been using xUnit as a testing framework for some time now. 

      Add a new project to your solution called typescript-marionette-xunit.  Make sure the project type is a Class Library:

      image

      Delete the Class1.cs file that is automagically created for you.

      Now let’s add the nuGet packages for xUnit.  Go to Tools | Library Package Manager | Package Manager Console.

      At the top of the screen, next to the Package source dropdown, there is a Default project dropdown.  Make sure that you have selected the typescript-marionette-xunit project here:

      skitch_screenshot_1

      Now install xunit as follows:

      Install-Package xunit -Version 1.9.2

      Install-Package xunit.extensions -Version 1.9.2

      Next, add a reference to the typescript-marionette project: right-click on References, Add Reference – then choose the typescript-marionette project under Solution Projects:

      image

      Now add a Controllers directory, and then a HomeDataControllerTests.cs class as follows:

      using System.Collections.Generic;
      using Xunit;
      using typescript_marionette.Controllers;

      namespace typescript_marionette_xunit.Controllers
      {
          public class HomeDataControllerTests
          {
              [Fact]
              public void GetData_Returns_ListOfStrings()
              {
                  HomeDataController controller = new HomeDataController();
                  Assert.Equal(new List<string> {"test", "test"}, controller.GetData());
              }
          }
      }

      Running this unit-test will produce the following error:

      Assert.Equal() Failure
      Position: First difference is at position 0
      Expected: List<String> { "test", "test" }
      Actual:   List<String> { "test1", "test2" }

      I firmly believe that you should always write a test that fails first, before modifying your code to make the test pass.  Obviously this is as simple as changing the expected List<string> to be { "test1", "test2" }.
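      With that change, the assertion in our test becomes:

      Assert.Equal(new List<string> { "test1", "test2" }, controller.GetData());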

      Modifying the WebAPI Data Controller to return a collection of nested C# POCO objects.

      It’s time now to get the DataController to return some real data.  For the purposes of this article, let’s assume we want to return a list of users, and how they scored per round.  The classes involved are as per the following class diagram:

      skitch_screenshot_2

      Create a ResultsModels.cs file under the /Models directory, as follows:

      using System;
      using System.Collections.Generic;

      namespace typescript_marionette.Models
      {
          [Serializable]
          public class UserModel
          {
              public UserModel()
              {
                  RoundScores = new List<RoundScore>();
              }
              public string UserName;
              public string RealName;
              public List<RoundScore> RoundScores;
          }
      
          [Serializable]
          public class RoundScore
          {
              public int RoundNumber;
              public int TotalPoints;
          }
      }

      Now, let’s modify the HomeDataController to return a list of these models.  At the same time, we may as well define a new url (api/results) to return these results:

              [Route("api/results")]
              [HttpGet]
              public HttpResponseMessage GetUserResults()
              {
                  return Request.CreateResponse<IEnumerable<UserModel>>
                      (HttpStatusCode.OK, GetUserResultModels());
              }
      
              public List<UserModel> GetUserResultModels()
              {
                  return new List<UserModel>
                  {
                      new UserModel { UserName = "testUser_1", RealName = "Test User No 1",
                          RoundScores =  new List<RoundScore>
                      {
                            new RoundScore { RoundNumber = 1, TotalPoints = 2 }
                          , new RoundScore { RoundNumber = 2, TotalPoints = 3 }
                          , new RoundScore { RoundNumber = 3, TotalPoints = 2 }
                          , new RoundScore { RoundNumber = 4, TotalPoints = 5 }
                      } },
                      new UserModel { UserName = "testUser_2", RealName = "Test User No 2", 
                          RoundScores =  new List<RoundScore>
                      {
                            new RoundScore { RoundNumber = 1, TotalPoints = 5 }
                          , new RoundScore { RoundNumber = 2, TotalPoints = 6 }
                          , new RoundScore { RoundNumber = 3, TotalPoints = 2 }
                          , new RoundScore { RoundNumber = 4, TotalPoints = 1 }
                      }  },
                      new UserModel { UserName = "testUser_3", RealName = "Test User No 3", 
                          RoundScores =  new List<RoundScore>
                      {
                            new RoundScore { RoundNumber = 1, TotalPoints = 3 }
                          , new RoundScore { RoundNumber = 2, TotalPoints = 5 }
                          , new RoundScore { RoundNumber = 3, TotalPoints = 6 }
                          , new RoundScore { RoundNumber = 4, TotalPoints = 6 }
                      }  }
                  };
              }
      

      Now let’s fire up our application, and browse to /api/results (using Chrome) to see what we get:

      image
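      Based on the GetUserResultModels() data above, the JSON response shown in the browser should look something like this (formatted here for readability):

      [
        { "UserName": "testUser_1", "RealName": "Test User No 1",
          "RoundScores": [ { "RoundNumber": 1, "TotalPoints": 2 }, { "RoundNumber": 2, "TotalPoints": 3 },
                           { "RoundNumber": 3, "TotalPoints": 2 }, { "RoundNumber": 4, "TotalPoints": 5 } ] },
        { "UserName": "testUser_2", "RealName": "Test User No 2",
          "RoundScores": [ { "RoundNumber": 1, "TotalPoints": 5 }, { "RoundNumber": 2, "TotalPoints": 6 },
                           { "RoundNumber": 3, "TotalPoints": 2 }, { "RoundNumber": 4, "TotalPoints": 1 } ] },
        { "UserName": "testUser_3", "RealName": "Test User No 3",
          "RoundScores": [ { "RoundNumber": 1, "TotalPoints": 3 }, { "RoundNumber": 2, "TotalPoints": 5 },
                           { "RoundNumber": 3, "TotalPoints": 6 }, { "RoundNumber": 4, "TotalPoints": 6 } ] }
      ]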

      Defining TypeScript Backbone.Model classes to match our nested class structure.

      At this point, we will need some Backbone.Model classes and a Backbone.Collection to retrieve data from our /api/results url.  Backbone.Collections have a very simple method of retrieving data from REST services – simply specify the url property.  As an example, if we were to modify the NavBarButtonCollection (that we created in Part 1) to load data from REST services, we would do the following:

      class NavBarButtonCollection extends Backbone.Collection {
          constructor(options?: any) {
              super(options);
              this.model = NavBarButtonModel;
              this.url = "/api/navbars";
          }
      }

      So let’s create some Backbone.Models to emulate the C# POCO class structure for UserModel and RoundScore that we built above.  The trick here is to use TypeScript interfaces to define the relationships.  In the /tscode/models directory, create a new TypeScript file named UserResultModels.ts.  For simplicity, I have included the interfaces and Models in the same TypeScript file.  The interface definitions for the returned Json objects are shown below.  Note how we are defining the nested properties as arrays [ ].  Also, the property names must exactly match the C# POCO property names.

      interface IRoundScore {
          RoundNumber?: number;
          TotalPoints?: number;
      }
      
      interface IUserModel {
          UserName?: string;
          RealName?: string;
          RoundScores?: IRoundScore [];
      }

      Next, we create the Backbone.Model classes based on these interfaces.  Note that the ES5 property getters and setters match the signatures of the interfaces.

      class RoundScore extends Backbone.Model implements IRoundScore {
          get RoundNumber(): number { return this.get('RoundNumber'); }
          set RoundNumber(value: number) { this.set('RoundNumber', value); }
      
          get TotalPoints(): number { return this.get('TotalPoints'); }
          set TotalPoints(value: number) { this.set('TotalPoints', value); }
      
          constructor(input: IRoundScore) {
              super();
              for (var key in input) {
                  if (key) { this[key] = input[key]; }
      
              }
          }
      }
      
      class UserModel extends Backbone.Model implements IUserModel {
          get UserName(): string { return this.get('UserName'); }
          set UserName(value: string) { this.set('UserName', value); }
      
          get RealName(): string { return this.get('RealName'); }
          set RealName(value: string) { this.set('RealName', value); }
      
          get RoundScores(): IRoundScore[] { return this.get('RoundScores'); }
          set RoundScores(value: IRoundScore[]) { this.set('RoundScores', value); }
      
          constructor(input: IUserModel) {
              super();
              for (var key in input) {
                  if (key) { this[key] = input[key]; }
      
              }
          }
      }
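As a quick sanity check of this pattern, a model can be constructed from a plain object and read back through the ES5 getters.  A minimal sketch (the values here are purely illustrative):

// build models from plain objects - each property is copied via its ES5 setter
var score = new RoundScore({ RoundNumber: 1, TotalPoints: 3 });
console.log(score.RoundNumber);        // 1 - read via the ES5 getter
console.log(score.get('TotalPoints')); // 3 - the same value via Backbone's get()

var user = new UserModel({
    UserName: 'testUser_1',
    RoundScores: [{ RoundNumber: 1, TotalPoints: 3 }]
});
console.log(user.RoundScores.length);  // 1 - nested data is stored as a plain array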

Now to create our collection (again in /tscode/models/UserResultModels.ts):

      class UserResultCollection extends Backbone.Collection {
          constructor(options?: any) {
              super(options);
              this.model = UserModel;
              this.url = "/api/results";
          }
      }

As a quick test of this collection, let’s load it in our MarionetteApp as a variable.  Note that we have passed { async: false } to the fetch() function of the UserResultCollection.  This option halts execution of the calling code (making the call synchronous) until the collection is loaded.  In general, it is better practice NOT to specify this option unless absolutely necessary (see the asynchronous sketch after the code below).  The initializeAfter() function in MarionetteApp.ts is shown below:

          initializeAfter() {
              var navBarButtonCollection: NavBarButtonCollection = new NavBarButtonCollection(
                  [
                      { Name: "Home", Id: 1 },
                      { Name: "About", Id: 2 },
                      { Name: "Contact Us", Id: 3 }
                  ]);
      
              var navBarView = new NavBarCollectionView({ collection: navBarButtonCollection });
      
              navBarView.on("itemview:navbar:clicked", this.navBarButtonClicked);
      
              this.navbarRegion.show(navBarView);
      
              var resultsCollection = new UserResultCollection();
              resultsCollection.fetch({ async: false });
          }
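As an aside, if we drop the { async: false } option, Backbone’s fetch() function accepts success and error callbacks, so the same check can be done without blocking the calling code.  A sketch of the asynchronous equivalent (the logging here is illustrative only):

// non-blocking alternative: react to the fetch result in callbacks
var resultsCollection = new UserResultCollection();
resultsCollection.fetch({
    success: (collection, response, options) => {
        // fires once the GET to /api/results has completed successfully
        console.log('loaded ' + collection.length + ' user results');
    },
    error: (collection, response, options) => {
        console.log('failed to load /api/results');
    }
});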

Before trying to debug this code, don’t forget to include the UserResultModels.js file in our Views/Home/Index.cshtml file:

          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonCollection.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonModel.js"></script>
          
          <script language="javascript" type="text/javascript" src="../../tscode/models/UserResultModels.js"></script>

Setting a breakpoint after the collection is loaded, and checking the resultsCollection variable in Visual Studio, should show that we have successfully loaded the JSON returned from our REST ApiController:

[Screenshot: the resultsCollection variable inspected in the Visual Studio debugger, populated from /api/results]

But debugging and manually verifying that our collection is loaded correctly is just that: manual.  And manual is time-consuming, error-prone, and just a pure pain.  So let’s write a unit test to verify that our model is loading correctly from the C# DataController.

Unit testing the Backbone Collection with Jasmine

To set up a unit test for our UserResultCollection, we will create a web page named SpecRunner.html.  This is a simple page that just includes all of our required .js files, and then calls execute() on the Jasmine environment.

Firstly, create a /tscode/test directory, then add an html page to this directory named SpecRunner.html.  This file is very similar to /Views/Home/Index.cshtml, and should also include the meta tag http-equiv for IE – simply copy the <head> section from Index.cshtml.  As well as including all of our source .js files, we will also need to include /scripts/jasmine.js, /scripts/jasmine-html.js, and the /css/jasmine.css file as follows:

      <!DOCTYPE html>
      
      <html>
      <head>
          <meta http-equiv="X-UA-Compatible" content="IE=edge">
          <title>Index</title>
      
          <link rel="stylesheet" href="../../Content/bootstrap.css" type="text/css" />
          <link rel="stylesheet" href="../../Content/app.css" type="text/css" />
          
          <script language="javascript" type="text/javascript" src="../../Scripts/jasmine.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/jasmine-html.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/jasmine-jquery.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../Scripts/jquery-1.9.1.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/json2.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/underscore.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/backbone.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/backbone.marionette.js"></script>
          <script language="javascript" type="text/javascript" src="../../Scripts/bootstrap.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/MarionetteApp.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/views/NavBarItemView.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/views/NavBarCollectionView.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonCollection.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonModel.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/models/UserResultModels.js"></script>
      </head>
      <body>
          <script type="text/javascript">
              var jasmineEnv = jasmine.getEnv();
              jasmineEnv.addReporter(new jasmine.HtmlReporter());
              jasmineEnv.execute();
          </script>
      </body>
      </html>

Note that the jasmine-jquery.js file is not included in the NuGet package for jasmine-js.  You will need to download the file from here [ jasmine-jquery-1.3.1.js ] – and then save it into your /Scripts directory.

To run this file, simply right-click on it, select the menu option Set As Start Page, and then hit F5 to debug.  But don’t do it just yet – if you do, you will end up with a blank page.  Why?  Well, we haven’t written any Jasmine tests yet.

      Writing Jasmine unit tests for our Backbone Collection.

In the /tscode/test directory, create a directory named models.  Now create a TypeScript file for our UserResultCollection tests named UserResultCollectionTests.ts, and include the reference paths for our definition files at the top.

Jasmine tests all fall within a describe('test suite name', () => { … tests go here … }) block, which defines the test suite.  Within this describe function, each individual test is defined with an it('test description', () => { … test goes here … }) function, as follows:

      /// <reference path="../../../Scripts/typings/jquery/jquery.d.ts"/>
      /// <reference path="../../../Scripts/typings/underscore/underscore.d.ts"/>
      /// <reference path="../../../Scripts/typings/backbone/backbone.d.ts"/>
      /// <reference path="../../../Scripts/typings/marionette/marionette.d.ts"/> 
      /// <reference path="../../../Scripts/typings/jasmine/jasmine.d.ts"/> 
      /// <reference path="../../../Scripts/typings/jasmine-jquery/jasmine-jquery.d.ts"/> 
      
      describe('/tscode/test/models/UserResultCollectionTests.ts ', () => {
      
          it('should fail', () => {
              expect('undefined').toBe('defined');
          });
      
      });

      Now, just include the generated .js file in your SpecRunner.html file:

          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonCollection.js"></script>
          <script language="javascript" type="text/javascript" src="../../tscode/models/NavBarButtonModel.js"></script>
      
          <script language="javascript" type="text/javascript" src="../../tscode/models/UserResultModels.js"></script>
          <script language="javascript" type="text/javascript" src="./models/UserResultCollectionTests.js"></script>

Running the app now (F5) should show the results of the Jasmine tests:

[Screenshot: the Jasmine HtmlReporter page showing our single, deliberately failing test]

Ok, now that we have Jasmine up and running, let’s write some unit tests for our UserResultCollection.  Jasmine has two setup and teardown functions – beforeEach() and afterEach() – that run before and after each test respectively.  In beforeEach(), we set up our collection, and we can then re-use it in each of our tests as follows:

      describe('/tscode/test/models/UserResultCollectionTests.ts ', () => {
      
          var userResultCollection: UserResultCollection;
      
          beforeEach(() => {
              userResultCollection = new UserResultCollection();
              userResultCollection.fetch({ async: false });
          });
      
          it('should return 3 records from HomeDataController', () => {
              expect(userResultCollection.length).toBe(3);
          });
      
      });

To find a specific instance in this collection, we can use the underscore-style functions where() and findWhere().  where() will return an array of all elements that match the criteria, and findWhere() will return the single Model in our collection that matches the selection criteria:

          it('should find 1 UserModel with Name testUser_1', () => {
              var userModels = userResultCollection.where({ UserName: 'testUser_1' });
              expect(userModels.length).toBe(1);
      
          });
      
          it('should return a UserModel with Name testUser_1', () => {
              var userModel = userResultCollection.findWhere({ UserName: 'testUser_1' });
              expect(userModel).toBeDefined();
          });
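Note that where() and findWhere() only match on simple attribute equality.  For anything more involved, Backbone collections also proxy underscore’s filter() function, which accepts an arbitrary predicate.  A hedged sketch – the 10-point threshold here is made up purely for illustration:

    // sketch only: find all users whose points across all rounds exceed 10
    var highScorers = userResultCollection.filter((model: Backbone.Model) => {
        var scores: IRoundScore[] = model.get('RoundScores') || [];
        var total = 0;
        for (var i = 0; i < scores.length; i++) {
            total += scores[i].TotalPoints;
        }
        return total > 10;
    });
    // highScorers is a plain array of the Backbone.Models matching the predicate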

Jasmine tests can also be nested.  This means that we can place a describe() block of tests inside another, and the inner tests will run with the parent’s beforeEach() and afterEach() functions.  Within this inner describe() block, we can also create our own beforeEach() functions.  This provides us with a handy way of testing a single model within the userResultCollection:

          it('should return a UserModel with Name testUser_1', () => {
              var userModel = userResultCollection.findWhere({ UserName: 'testUser_1' });
              expect(userModel).toBeDefined();
          });
      
          // this describe block is nested inside our main describe block
          describe(' UserModel tests ', () => {
              var userModel: UserModel;
              beforeEach(() => {
                  // the userResultCollection is setup in the parent beforeEach() function
                  userModel = <UserModel> userResultCollection.findWhere({ UserName: 'testUser_1' });
              });
      
              it('should set UserName property', () => {
                  expect(userModel.UserName).toBe('testUser_1');
              });
      
              // check that we are getting an array for our nested JSON objects
              it('should set RoundScores property', () => {
                  expect(userModel.RoundScores.length).toBe(4);
              });
          });

Now let’s use the same technique to get the third RoundScore model from the array of RoundScores for this UserModel:

              // check that we are getting an array for our nested JSON objects
              it('should set RoundScores property', () => {
                  expect(userModel.RoundScores.length).toBe(4);
              });
      
              // nested describe block re-uses the userModel set in parent beforeEach()
              describe('RoundScore tests', () => {
                  var roundScore: RoundScore;
                  beforeEach(() => {
                      roundScore = <RoundScore> userModel.RoundScores[2]; // get the third RoundScore
                  });
      
                  it('should have RoundNumber set to 3', () => {
                      expect(roundScore.RoundNumber).toBe(3);
                  });
                  it('should have TotalPoints set to 2', () => {
                      expect(roundScore.TotalPoints).toBe(2);
                  });
              });

[Screenshot: the Jasmine HtmlReporter page with all collection, UserModel and RoundScore tests passing]

      Creating a Marionette.CompositeView to render the Backbone Collection in a table.

So we are now confident that our Backbone Collection is working correctly.  The next step is to create a Marionette.CompositeView to render this collection in a table.  In the /tscode/views directory, create a new TypeScript file named UserResultViews.ts.  Once again, extend from Marionette.CompositeView, and set the options.template property:

      class UserResultsView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultsViewTemplate";
              super(options);
          }
      }

Next, update the Index.cshtml to provide an html snippet that matches the options.template property (#userResultsViewTemplate) above.  While we are at it, let’s also create a new Marionette Region for our UserResultsView to render into.  Modify the Index.cshtml as follows, and don’t forget to include the new JavaScript file in the <head> element:

          <script language="javascript" type="text/javascript" src="../../tscode/views/UserResultViews.js"></script>
              <div class="container"> @* wrap the row with a container *@ 
                  <div class="row">
                      <div class="col-lg-12">
                          @*<h1>Hello Home Controller</h1> // old code *@
                          <div id="userResultRegion"></div> @*  new region  *@
                      </div>
                  </div>
              </div>
              
              @*  new template  *@ 
              <script type="text/template" id="userResultsViewTemplate">
                  This is the userResultsViewTemplate.
              </script>

Next, we need to modify our MarionetteApp to include the new region, create a UserResultsView, and show this view in the region:

      class MarionetteApp extends Marionette.Application {
          navbarRegion: Marionette.Region;
          userResultRegion: Marionette.Region; // new region
          constructor() {
              super();
              this.on("initialize:after", this.initializeAfter);
              this.addRegions({ navbarRegion: "#navbarRegion" });
              this.addRegions({ userResultRegion: "#userResultRegion" }); // new region
          }
          initializeAfter() {
              var navBarButtonCollection: NavBarButtonCollection = new NavBarButtonCollection(
                  [
                      { Name: "Home", Id: 1 },
                      { Name: "About", Id: 2 },
                      { Name: "Contact Us", Id: 3 }
                  ]);
              var navBarView = new NavBarCollectionView({ collection: navBarButtonCollection });
              navBarView.on("itemview:navbar:clicked", this.navBarButtonClicked);
              this.navbarRegion.show(navBarView);
      
              var userResultView = new UserResultsView(); // create the new view
              this.userResultRegion.show(userResultView); // show the view
          }
          navBarButtonClicked(itemView: Marionette.ItemView, buttonId: number) {
              alert('Marionette.App handled NavBarItemView clicked with id :' + buttonId);
          }
      }

      If all goes well, we should see the new template displayed on the page:

[Screenshot: the page rendering the userResultsViewTemplate placeholder text]

      Using a Marionette.CompositeView as an ItemView:

Now that we have the top-level view rendering correctly, let’s create another CompositeView to serve as the view for each user in our UserResultCollection.  Simply create another CompositeView named UserResultItemView, give it a template, and then set the parent’s itemView property to the new class, as follows:

      class UserResultsView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultsViewTemplate";
              super(options);
              this.itemView = UserResultItemView; // set the child view here
          }
      }
      
      // new ItemView class
      class UserResultItemView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultItemViewTemplate"; // new template
              super(options);
          }
      }

      Next, create the html for userResultItemViewTemplate in the Index.cshtml:

              <script type="text/template" id="userResultItemViewTemplate">
                  This is the userResultItemViewTemplate for : <%= UserName %>
              </script>

      Finally, construct and fetch a new UserResultCollection in the MarionetteApp, and pass this collection to the UserResultsView:

              var userResultCollection = new UserResultCollection();
              userResultCollection.fetch({ async: false });
      
              var userResultView = new UserResultsView({ collection: userResultCollection }); // pass in the collection
              this.userResultRegion.show(userResultView);

      Running our app now will render an item for each element found in our UserResultCollection:

[Screenshot: one userResultItemViewTemplate rendered per user in the collection]

      Rendering nested Backbone.Collections.

Cool.  So now we need another ItemView to render the RoundScores per user (this is the nested collection within our Users collection).  All we need is a ResultItemView to render a single RoundScore; we then set the parent’s itemView property to our new child view, exactly as we did before.  Remember to create an html template in our Index.cshtml as well:

      class UserResultItemView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultItemViewTemplate"; 
              super(options);
              this.itemView = ResultItemView; // set the child view here
          }
      }
      
      // new ResultItemView class
      class ResultItemView extends Marionette.ItemView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#resultItemViewTemplate"; // new template
              super(options);
          }
      }

      Index.cshtml:

              <script type="text/template" id="resultItemViewTemplate">
                  This is the resultItemViewTemplate
              </script>

Running our app now should show a ResultItemView for each RoundScore per user, right?

[Screenshot: the per-user templates render, but no RoundScores appear]

Ok, so what went wrong?

In order to use an itemView, the composite view needs its collection property set correctly.  Remember that when we instantiated the top-level view, we passed our collection in the constructor:

      var userResultView = new UserResultsView({ collection: userResultCollection }); // pass in the collection

Each item in this collection creates a new instance of the UserResultItemView class, which is passed the model to render.  So all we need to do is set the collection property in the UserResultItemView constructor, building our own internal collection from the incoming model.  Before we do this, however, let’s just create a quick collection to hold RoundScores.  In /models/UserResultModels, create a new collection named RoundScoreCollection to hold RoundScore models as follows:

      class RoundScoreCollection extends Backbone.Collection {
          constructor(options?: any) {
              super(options);
              this.model = RoundScore;
          }
      }

Now we can modify the constructor of UserResultItemView to set the collection:

      class UserResultItemView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultItemViewTemplate"; 
              super(options);
              this.itemView = ResultItemView; 
              // set internal collection:
              this.collection = new RoundScoreCollection(options.model.RoundScores);
          }
      }

      Running the app now will render correctly:

[Screenshot: each user now renders along with their nested RoundScores]

      Using CompositeView properties to generate html

One of the advantages of using Marionette.CompositeViews is the ability to control the rendered html.  Let’s update our template html and Views to render the results in a table.

      Firstly, modify the html template for userResultsViewTemplate to create a <thead> and <tbody> as follows:

              <script type="text/template" id="userResultsViewTemplate">
                  <thead>
                      <tr>
                          <th>UserName</th>
                          <th>1</th>
                          <th>2</th>
                          <th>3</th>
                          <th>4</th>
                      </tr>
                  </thead>
                  <tbody></tbody>
              </script>

      Obviously, we need to wrap this html in a topmost <table> tag – and render our child views within the <tbody> html region.  These two settings are made in the UserResultsView:

      class UserResultsView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultsViewTemplate";
              options.tagName = "table"; // outer tag
              options.itemViewContainer = "tbody"; // itemview container
              super(options);
              this.itemView = UserResultItemView; 
          }
      }

      Now update the html template for userResultItemViewTemplate to wrap the UserName property in a <td> tag, and set the outer tagName for the UserResultItemView to <tr>:

              <script type="text/template" id="userResultItemViewTemplate">
                  <td><%= UserName %></td>
              </script>
      class UserResultItemView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultItemViewTemplate";
              options.tagName = "tr"; // outer tagname
              super(options);
              this.itemView = ResultItemView; 
              this.collection = new RoundScoreCollection(options.model.RoundScores);
          }
      }

      Finally, update the html resultItemViewTemplate to render TotalPoints in a div, and update the tagName to use <td>:

              <script type="text/template" id="resultItemViewTemplate">
                  <div><%= TotalPoints %></div>
              </script>
      class ResultItemView extends Marionette.ItemView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#resultItemViewTemplate";
              options.tagName = "td"; // outer tagname
              super(options);
          }
      }

      Running our app now will create a table as follows:

[Screenshot: the results rendered as an html table]
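For clarity, the markup that these nested CompositeViews build up in the DOM looks roughly like the following hand-written sketch – one <tr> per UserModel, one <td> per RoundScore, with the point values purely illustrative:

        <table>
            <thead>
                <tr><th>UserName</th><th>1</th><th>2</th><th>3</th><th>4</th></tr>
            </thead>
            <tbody>
                <!-- one UserResultItemView (tagName "tr") per UserModel -->
                <tr>
                    <td>testUser_1</td>   <!-- from userResultItemViewTemplate -->
                    <!-- one ResultItemView (tagName "td") per RoundScore -->
                    <td><div>3</div></td>
                    <td><div>5</div></td>
                    <td><div>6</div></td>
                    <td><div>6</div></td>
                </tr>
                <!-- ... one row per user in the collection ... -->
            </tbody>
        </table>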

      Using bootstrap styles in a Marionette.CompositeView

Finally, let’s add some classes to our CompositeView to use bootstrap styles to render the table.  Update the UserResultsView and specify the className property:

      class UserResultsView extends Marionette.CompositeView {
          constructor(options?: any) {
              if (!options)
                  options = {};
              options.template = "#userResultsViewTemplate";
              options.tagName = "table";
              options.className = "table table-hover"; // inject a class 
              options.itemViewContainer = "tbody"; 
              super(options);
              this.itemView = UserResultItemView; 
          }
      }

Running the app now gives us our final result: rendered html based on nested JSON from a WebAPI DataController:

[Screenshot: the final table styled with the bootstrap table and table-hover classes]

      And that wraps it up.

      Have fun,

      blorkfish.