Tuesday, March 19, 2013

FluentSiteMap

I'm fairly excited to announce my first real OSS contribution.  It's a library called FluentSiteMap that's useful in ASP.NET MVC web applications for building website site maps, something that was a little more built-in with classic ASP.NET.  I built this library about a year ago while working on a side project, and ever since I've wanted to get it out into the OSS community, but I just never made the time (and yes, it did take a considerable amount of effort).

The project is out on GitHub and even has a NuGet package so it can be easily used by web developers.  Full documentation is currently in the README file on the GitHub page.

In the process of preparing FluentSiteMap for the OSS world, I also published two more projects, both used by FluentSiteMap:

So feel free to check it out and contribute to the projects if you like!  Since the original code is slightly dated, FluentSiteMap is actually based on ASP.NET MVC2, so right out of the box it could use an upgrade to the newer versions of MVC (although I think it works fine in MVC3/4).  Version 1.0.0 also shipped with a bug (imagine that!) that I felt was low enough priority not to delay the initial OSS release.  You can see the list of current issues here.


Wednesday, July 25, 2012

Elegantly importing photos from Android to your iPhoto library

Man, it's been a long time since I did a blog post.  I usually finally do one when I've been beating my head against the wall, trying to figure something out, and finally arrive at a workable solution, one that could hopefully benefit at least one other miserable person like myself.  Well, I finally hit on one of those today.

So for a long time my wife has had a simple request: "I want to be able to import the photos I have on my Android phone (Samsung Galaxy S2) into my iPhoto library on my Mac."


For some reason our Android phones have never been able to do this.  If you plug them into our MacBook via a USB cable, or even remove the micro SD card and connect it directly, none of the pictures show up in the iPhoto import screen.  Other people I'd talked to had no issues doing this.  Why me?  Grr.  This came to a head when my wife picked up a new iPad the other day and wanted to be able to manage pictures and do this importing through it as well.

Well, it turns out I'm not alone in this issue after finally doing some research.  At some point Android stopped being strict about how it followed the DCIM digital camera specification for storing photos.  Unfortunately iPhoto is strict about it, and for good reason: you don't want it importing every photo it finds on your Android when you plug it in - just the ones that reside in the DCIM directory and adhere to the DCIM naming convention.  Some Androids adhere to this spec while many others don't (including the Galaxy S2).  Unfortunately, if you're in the latter camp, there's no good way to make the direct import approach work.  There are a couple of apps on the Android Market that will force your photos to be strict DCIM, but they're somewhat clunky and don't work on newer versions of Android (like ICS).

So the solution I finally found that worked was to use indirect sync.


I found an Android app called DropSync that syncs one or more individual directories on your Android to a DropBox folder.  Therefore you can sync your camera folder on your Android (ex: /sdcard/DCIM/Camera) with a new folder on DropBox (ex: /Photos/MyAndroid).  The free version limits you to files < 5MB, but the paid version is unlimited and worth the couple of bucks.  This is a two-way sync app, so files deleted in DropBox will get deleted on the Android, making it super easy to keep the camera directory clean without having to do anything on the phone itself.

So the workflow is:
  • Take pictures on your Android phone
  • DropSync detects the new files and automatically syncs them to DropBox (assuming you're on a wifi network - if not, they get synced the next time you are - wifi-only sync is configurable)
  • Importing pictures into your iPhoto library on your Mac (or PC):
    • Open your local DropBox folder
    • Browse to the photos directory that's being sync'd with your Android
    • Select all the pictures you wish to import into iPhoto
    • Drag them into iPhoto and import them
    • Optionally (but recommended) delete the imported photos and undesirable photos. This will delete them from your phone as well the next time DropSync performs a sync.
  • Importing pictures into your iPhoto library via your iOS device (ex: your wife's iPad):
    • Open the DropBox app
    • Browse to the photos directory that's being sync'd with your Android
    • Tap each photo and save it to the device's photos (it goes into the Camera Roll).  Unfortunately you can't do a bulk save, but one at a time isn't too bad.
    • Optionally (but recommended) delete the imported photos and undesirable photos.  This will delete them from your phone as well the next time DropSync performs a sync.
    • Optionally touch up the pictures in the Photos app (or maybe iPhoto)
    • Sync the iOS device with iTunes on your Mac (or PC) to finally import them into your iPhoto library
Bonus:
DropBox allows you to share folders so, for example, the husband's Android camera pictures folder can be shared with the wife's account, and she can perform the above process for both devices in one place.  This could essentially be done with any number of devices until you hit your DropBox storage limit.

UPDATE (7/26/12):
A friend of mine pointed out that the DropBox Android app (as well as the iOS app) has a feature that automatically uploads your camera photos to a folder in DropBox called /Camera Uploads.  The pro of this approach vs. using DropSync is that DropBox actually gives you more storage (up to 3GB, I think) as you add more and more pictures/videos, increasing your overall storage limit for free.  However, the con is that the DropBox upload feature is only a one-way sync.  Therefore, if you delete the photos from the DropBox folder, they won't automatically be deleted from your phone.  For me, the two-way sync is key to the simplicity of the solution, but others may prefer to just use the DropBox feature instead.


Wednesday, April 21, 2010

GeeTasks Pro

I’m a bit of a personal productivity junkie and it seems I’m always trying out the latest time-management apps on computers and my iPhone.  Well, I think I’ve finally settled on one I really like: GeeTasks Pro.

GeeTasks is basically a rich GUI for Google Tasks, a TO-DO list in the Google cloud that integrates with GMail.  So the nice thing is you can manage your tasks on any computer with a web browser and then use GeeTasks when you’re on your iPhone (or iPod Touch).  Google also has an iPhone web interface for Google Tasks, but GeeTasks is much more functional and, since it caches all your task data locally, it works when you’re not connected to the internet!

The really cool thing about GeeTasks is it’s so easy to use (in some ways, more so than the Google Tasks web app).  The developer is also actively improving it and even added features based on my personal suggestions!

So if you’re looking for a nice anywhere TO-DO app solution, I highly suggest you check it out!

Friday, October 16, 2009

First Snow

This picture was just too cute not to post. Here’s Brinna, my 1-year-old daughter, about a week ago when we had our first snow of the year. This is her first experience with snow, at least as a walking human being. I’m sure she doesn’t remember it from last winter. You just have to wonder what’s going through her mind right at this moment.

Probably something like: “Why the heck are we getting snow in early October?!?”

Sunday, August 23, 2009

Using a Mobile Source Code Repository

corona_laptop

If you’re a consultant or just do a lot of coding on the side, either for profit or to keep your skills sharp, it’s often nice to be able to store your code in a source code repository. The benefits of a personal repository are almost the same as what you get with a repository shared by a team of developers (history-tracking of your code, the ability to branch and merge, easy roll-back of uncommitted changes); the only difference is that you’re the only user.

But setting up a reliable source code repository is not trivial, especially if you want to be able to access it while on the go. The easy route is to pay for a hosted solution, but that requires an internet connection (at least for doing source control operations), and hosted solutions usually cost money if you want something decent.

I’ve come up with a couple of simple approaches that allow you to have a personal repository that you control, that can be accessed from anywhere whether you have an internet connection or not, and that gets backed up so you’re covered in the event of data corruption.

Virtualization

Before I jump into the details, I’d like to say a couple things about virtualization since it is a big component of the solutions I came up with (especially the second). While you can easily code using tools installed directly on your host OS, I highly recommend isolating your software development environments on one or more virtual machines (VM’s). There are several reasons for this:

  • It allows you to develop on operating systems that are more comparable to what your application may be running on in production. Most workstation OS’s (like Windows Vista or Windows 7) do not come with all of the server-side components or are just not quite configured the same as the server OS your app may be running on in production (Windows 2000 Server, Windows Server 2003, Windows Server 2008, etc). Some server-side application services (like SharePoint) won’t even run on a non-server OS.
  • It allows you to have multiple different development environments. If you have one project or client that uses Visual Studio 2005 and another that uses 2008, it’s safer to run them on separate operating systems to prevent interference.
  • It allows you to easily play with new beta development tools without hosing up your working development environments. Nothing like installing the latest beta of VS2010 and finding out VS2008 no longer loads your paying customer’s solution.
  • If your host OS is not Windows (ex: Mac OSX) and you want to develop Windows applications, you don’t have much of a choice!

Which virtualization option you go with is pretty much irrelevant. For the longest time, I was a big user of Microsoft’s Virtual PC. It was free and it worked great. Since then I’ve moved to VMWare’s Workstation and Player on Windows and Fusion on the Mac. VMWare seems to have a slightly superior product at the moment, but Microsoft’s virtualization offering is not too far behind, especially when you consider the new functionality available in Windows 7.

Source Control System

There are a few good choices out there for source control.  Personally, I like Subversion (SVN) the best.  It’s free (open source) and a lot of very useful third-party tools are out there.  TortoiseSVN is an excellent Windows GUI client and is free.  AnkhSVN (now being developed by CollabNet, the company that hosts the Subversion project itself) provides source control integration into Visual Studio and is also free.  Finally, there’s VisualSVN.  They make a free server application called VisualSVN Server that allows you to get a Subversion repository up and running in minutes on a Windows server, complete with HTTPS access over Apache.  VisualSVN also makes a Visual Studio source control add-in that’s comparable to AnkhSVN, although arguably more robust.

The two solutions I came up with (Method 1 and Method 2 below) both use VMWare Workstation for the virtualization platform and Subversion for source control.  I’m sure one could adapt these steps fairly easily to work with other systems since the concepts behind them are what are important.

Method 1: Single VM with a Local Repository

When I first had the need to manage my own source control, I had a single VM I used to work on my code. 

single-vm

The source control solution in this case was quite simple since I could pretty much put everything on that VM.

Install a local repository

First I used TortoiseSVN to create a local file-based repository.  You can use the command-line Subversion tool to do this as well, but TortoiseSVN makes it easy:

 create-local-repository

I usually put the repository itself in a standard location (like C:\SVN) so the repository URL in my Working Copy becomes something like file:///C:/SVN/my-repository:

checkout
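If you’d rather skip the GUI, the equivalent command-line steps look roughly like this (a sketch assuming the same C:\SVN location and a repository named my-repository; the working copy path is just an example):

REM Create the file-based repository (same as TortoiseSVN's "Create repository here")
svnadmin create C:\SVN\my-repository

REM Check out a Working Copy from the local repository
svn checkout file:///C:/SVN/my-repository C:\Code\my-project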

Configure repository backup

You can do this a number of ways, but you want to make sure that whatever is backing up the repository is doing it automatically on a scheduled basis, so you don’t have to think about it.  An online backup service (like Mozy.com) works well, but there’s a monthly service charge.  I do all my backups on my wife’s computer, which is a MacBook.  Therefore I can use TimeMachine, which is integrated into Mac OS, to back up everything important, including my source code repository.

However, my repository is still sitting in the file system of my VM which isn’t hosted on the MacBook – it’s on my laptop, which is running Windows.  To sync the repository files to the MacBook so they can get backed up I use Windows Live Sync (used to be called FolderShare).  It’s a free file syncing tool from Microsoft that can sync between multiple PC’s and Mac’s via your local network or over the internet and even through firewalls.  All you need is a Windows Live ID to get started.

So as long as all my source code is developed on a single VM, I can keep the source control repository locally on that VM as a simple file-based SVN repository and use Live Sync and TimeMachine to keep my repository files backed up.  Also, if I go mobile and/or the MacBook isn’t connected to the internet (so Live Sync can’t sync), I still have access to the repository and can do all the source control operations I may need to keep developing code.  Then the next time my VM is able to connect to the Mac, it will sync the repository files so they can be included in the next backup.

Method 2: Multiple VM’s with a Shared Repository

The above solution works great until you decide to write your code on more than one VM.  Perhaps you have a couple projects going that require different development stacks.  Maybe one is an ASP.NET application running on IIS6 (Windows Server 2003) and another is on IIS7 (Windows Server 2008).  In that case you really should develop your applications on separate VM’s, each running the correct version of Windows.

second-dev-vm

Of course, now the problem is accessing the file-based source control repository across multiple VM’s.  One option is to move the SVN repository files to the host OS.  Most virtualization solutions like Microsoft Virtual PC and VMWare Workstation have a file sharing feature where a VM can access files on the host OS.  While this works, there’s a very noticeable performance hit.  The other drawback to putting the SVN repository on the host OS itself is that it’s mixing a part of your development environment infrastructure into the host, which conflicts with why you moved everything to virtual machines in the first place.

A better approach is to move the repository to a dedicated VM that can act as a light-weight server and handle source control operation requests from all the client VM’s.  Subversion can be run as a server in this way.  It uses Apache to handle HTTP requests from clients, and it seems to perform nearly as well as the local file-based repository.

Let’s take a look at how to set all this up.

Create a source control server VM

The first step is to create this server VM, which will host our shared repository and handle client SVN requests from the other VM’s.  This VM won’t require a lot of RAM since it’s primarily just going to serve up requests for source control operations.  I created mine using Windows Server 2008, which has a minimum memory requirement of 512MB, so that’s what I went with.

server-vm

Before we move on to installing additional software on the source server VM, we need to take care of some necessary networking infrastructure.

Create a private virtual network to connect all the VM’s

One of the other advantages of most virtualization systems out there is the ability to create virtual networks.  If you have multiple VM’s that need to communicate with one another and especially if that communication needs to be private (or doesn’t need to occur over a public network), you can create a virtual network.

In my situation, I ended up using one of the available custom networks installed by VMWare Workstation (vmnet2 in my case).  You don’t want to use the Host-Only, Bridged, or NAT networks since we need something private that we can dedicate to these VM’s.  In my case, to turn this on, I just added a network adapter to all the VM’s, including the server VM, and connected it to the vmnet2 virtual network:

virtual-network

This new network adapter showed up in the OS (Windows) of all the VM’s.  To make them easier to manage, I renamed them from “Local Area Connection” and “Local Area Connection 2” to “Public LAN” and “Private LAN” in each VM:

Windows-network-adapters

Set up DNS and DHCP services on the server VM

Simply connecting the VM’s together with our new private network connection doesn’t mean they can communicate (at least not very well) over the private network.  It’s the equivalent of connecting a bunch of computers to an Ethernet hub or switch with network cables.  Each host will default to a 169.254.x.x address, and name resolution won’t work at all if each OS has its default firewall turned on.

We need some way of handing out IP’s and, ideally, DNS name resolution for those IP’s (the most important one being the server’s).  To do that we first need to pick an IP subnet and a static IP for the server VM.  In my case I went with 192.168.76.x for the subnet and 192.168.76.1 for the static IP.  I then set the IP address of the “Private LAN” network adapter on the server VM to that static IP, which is done in Windows itself:

static-ip-config
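If you’d rather do this from a command prompt than the network adapter GUI, something like the following should be roughly equivalent (adapter name and addresses as above; I’m pointing the server’s DNS at itself since it will be our DNS server, and the exact netsh syntax can vary slightly between Windows versions):

netsh interface ip set address name="Private LAN" static 192.168.76.1 255.255.255.0
netsh interface ip set dns name="Private LAN" static 192.168.76.1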

Next we need to turn on DNS services on the server VM so it can resolve DNS names local to our virtual network.  After installing the service, I created a forward lookup zone for a private DNS domain called "ts-local".  You can use whatever private DNS domain name you want as long as it isn't something that might conflict with domain names on the internet or any other network you may be connected to.

Then I added an A (host) record for the server VM itself, whose hostname is "MCP":

dns-server
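For reference, the same zone and host record can be created from the command line with dnscmd - a rough equivalent of the DNS Manager steps above (the zone file name is just the conventional default):

dnscmd /ZoneAdd ts-local /Primary /file ts-local.dns
dnscmd /RecordAdd ts-local MCP A 192.168.76.1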

Next we need to enable the DHCP server service on the server VM so we can hand out the IP’s in our subnet.  Once you install the service, configure it to hand out IP’s in our subnet as well as specify the DNS server IP (same as the server VM) and the domain name (in my case "ts-local"):

dhcp-server

Finally, we should be able to release and renew the IP address on each of the client VM’s and get an IP in our subnet.  We should also be able to resolve the IP address of the server VM from the client VM’s using PING:

ping
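From a command prompt on a client VM, that sanity check looks something like this (adapter name per the renaming we did earlier; mcp.ts-local should resolve to 192.168.76.1):

ipconfig /release "Private LAN"
ipconfig /renew "Private LAN"
ping mcp.ts-local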

Install VisualSVN Server on the server VM

Now for the real reason we created this server VM in the first place.  Setting up a Subversion server that uses Apache (the web server SVN uses for HTTP access) by hand is no easy task (trust me, I’ve done it).  VisualSVN Server makes this a snap.  It will install Apache, configure HTTPS, and create a location for the repositories.  By default, VisualSVN Server creates the repositories directory at C:\Repositories.  I prefer C:\SVN.

visualsvn-server-manager

After the install is complete, all you have to do is create a username for your client VM’s to use to access the repositories.

Move the repository

If you started with Method 1 like I did, you need to move the file-based repository from the single VM to our shiny new Subversion server VM.  VisualSVN Server makes this easy with its import option:

import-repository

Simply browse to where the repository files are and perform the import.  You may want to copy them over to a temporary directory on the server VM first to make them easily accessible from the server’s file system.

Once the repository has been imported, you’ll want to assign the user account you created earlier read/write access to the repository:

asign-user

Your server is now ready to service source control requests!

Point the Working Copies to the new repository

If you already have Working Copies (the term Subversion uses for directories that are under source control) on your original development VM, you can point them at the new URL using TortoiseSVN’s Relocate function.  For example, if you had a working copy connected to file:///C:/SVN/my-repository, you can now point it to http://mcp.ts-local/SVN/my-repository:

relocate-1

relocate-2
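If you use the command-line client instead of TortoiseSVN, the equivalent for Subversion clients of that era is roughly the following (newer 1.7+ clients have a dedicated svn relocate command; the working copy path is just an example):

svn switch --relocate file:///C:/SVN/my-repository http://mcp.ts-local/SVN/my-repository C:\Code\my-project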

Configure repository backup

The final step in getting the multiple-VM source code repository operational is to configure the backup.  The approach is the same as with the single-VM setup – you just have to back up the repository files on the server VM instead.  In my case, that directory is C:\SVN.  I point Live Sync there, which syncs the files to my MacBook, which in turn backs everything up via TimeMachine.  Very nice!

The multi-VM solution also has the same disconnected benefits that the single-VM solution does.  You can perform all your source code operations anytime you want, even if your host machine is disconnected from the network.  The next time you are connected, Live Sync will sync any repository file changes to the MacBook so they can be backed up.

Saturday, July 25, 2009

Turn off Push with a Jailbroken iPhone 2G on T-Mobile with no Data Plan

So this tip probably applies to about 3 people out there besides me, but the road to a solution was about three days of pain, so I feel compelled to post something.

I've been a happy iPhone user since the beginning of 2009 when my manager at work sold me his 2G. A buddy of mine showed me the ropes of jailbreaking and unlocking the iPhone and before I knew it I was running iPhone OS 2.2.1 via QuickPwn on the T-Mobile network. Better yet, I didn't have to have a data plan (yes, I'm cheap), like you do with AT&T. I just hopped on an available WiFi network whenever I needed to use data - life was good.

Until I decided it would be a fun idea to upgrade my iPhone to the new 3.0 OS.

The actual install itself (I used redsn0w) was quite painless and relatively easy. After I got all my apps re-installed and my settings back to what I had before, I had a sweet phone that couldn't receive incoming calls or text messages. After several re-installs of the OS, attempting to use trial and error to narrow down the source of the problem, I finally stumbled upon the answer. And, yes, I tried googling the heck out of the internet and could not find another poor soul out there who was in my same predicament (hence the first sentence of this post).

The source of the problem was that I had Push turned on (which is the default when you install OS 3.0). Push is a technology that allows applications like email and calendar to receive updates from the server without having to continuously poll the server. Polling like this is also called Pull, the opposite of Push. Push requires a data plan to work. I don't have a data plan with T-Mobile. I guess Push doesn't behave well without one. Kinda makes sense - I'm sure the iPhone developers didn't really think about this scenario, since you're required to have a data plan with your iPhone if you're with AT&T.

Anyway, to turn Push off, here's where you go in the iPhone settings:

Settings > Mail, Contacts, Calendars > Fetch New Data > Push > Off

After I did that, everything works! And I have to say the 3.0 version of the iPhone OS is a nice upgrade, even on a 2G.

Wednesday, February 18, 2009

Using the Repository Pattern with CSLA.NET, Part 2

In my last post, we built an abstraction for a data access layer used by CSLA.NET business objects, based on the Repository Pattern.  We started by creating two base interfaces, IRepository<TContext> and IContext, and then used them to define a set of interfaces that abstracted the data access for a simple order entry system.  Those interfaces were:
  • IOrderRepository - represented the repository for orders and line items; a factory object that can create the other repository objects
  • IOrderContext - the context that IOrderRepository creates and that performs the actual data access
  • IOrderInfoDto - a DTO for order summary entities
  • IOrderDto - a DTO for orders
  • ILineItemDto - a DTO for order line items
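As a quick refresher, here is roughly what that abstraction looks like (a simplified sketch pieced together from the descriptions above and the implementations later in this post - the actual Part 1 definitions have a few more members, and the property types shown here are guesses):

public interface IContext : IDisposable
{
    void CompleteTransaction();
}

public interface IRepository<TContext>
    where TContext : IContext
{
    TContext CreateContext(bool isTransactional);
}

public interface IOrderInfoDto
{
    int Id { get; set; }
    string Customer { get; set; }
    DateTime Date { get; set; }
}

public interface IOrderDto
{
    int Id { get; set; }
    string Customer { get; set; }
    DateTime Date { get; set; }
    decimal ShippingCost { get; set; }   // type is a guess
    byte[] Timestamp { get; set; }
    IEnumerable<ILineItemDto> LineItems { get; }
}

public interface ILineItemDto
{
    byte[] Timestamp { get; set; }
    // ...other line item properties
}

public interface IOrderRepository : IRepository<IOrderContext>
{
    IOrderDto CreateOrderDto();
    ILineItemDto CreateLineItemDto();
}

public interface IOrderContext : IContext
{
    IEnumerable<IOrderInfoDto> FetchInfoList();
    IOrderDto FetchSingleWithLineItems(int id);
    void InsertOrder(IOrderDto newOrder);
    void UpdateOrder(IOrderDto existingOrder);
    void DeleteOrder(int id);
    // ...plus the corresponding line item methods
}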
We then demonstrated how objects based on those interfaces would be called from the data access areas within actual CSLA.NET business objects (in the DataPortal_* methods).  We also showed how the repository object was injected via dependency injection.  In our case, we used the Unity IOC container, but any IOC container or dependency injection framework would do.
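For context, the business object side ends up looking something like this (a hypothetical, simplified sketch - the class name is made up, the real business class from Part 1 has its properties and rules defined, and exactly how the repository gets injected through Unity is covered in that post):

[Serializable]
public class OrderEdit : BusinessBase<OrderEdit>
{
    // Injected via the IOC container (however Part 1 wired this up).
    private IOrderRepository _repository;

    private void DataPortal_Fetch(SingleCriteria<OrderEdit, int> criteria)
    {
        using (var ctx = _repository.CreateContext(false))
        {
            IOrderDto dto = ctx.FetchSingleWithLineItems(criteria.Value);
            // ...copy the DTO values into the business object's fields/properties here
        }
    }
}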
In this post we're going to build a concrete implementation of our abstract data access layer.  In other words, we'll build classes that implement the above interfaces.  In this case, we'll use LINQ to SQL as our actual data access technology.  However, given the high level of abstraction of our repository pattern implementation, we could conceivably build concrete implementations in many other technologies such as LINQ to Entities, <insert your favorite ORM here>, or even plain old ADO.NET.

LINQ to SQL Classes

Before we jump into the implementation of our interfaces, we need to first build our LINQ to SQL classes.  These classes will play an integral role in our concrete implementation.
LINQ to SQL requires that a physical database exist, so assume that we've started with a simple SQL Server database that contains an Orders table and a LineItem table with a one-to-many relationship between them.  We can then add an Orders.dbml file (LINQ to SQL Classes item) to a Visual Studio 2008 project and drag both tables onto the design surface:
Orders-dbml-screenshot
Visual Studio (actually LINQ to SQL) does a lot of work behind the scenes when we do this.  It code generates a LINQ to SQL DataContext class called OrdersDataContext and two entity classes called Order and LineItem that map to the database tables.
You may also notice in the screenshot above that we've added a set of stored procedures to our database that perform all the inserts, updates, and deletes, and they show up as methods on the OrdersDataContext class.  While LINQ to SQL entity classes have the ability to perform these operations on their own, they have a limitation that forces us to use an alternative mechanism, like stored procedures, instead.  We'll investigate this issue further a little later on in the post.
At this point, the OrdersDataContext, Order, and LineItem classes could be used by a set of CSLA.NET business objects to perform all the required data access.  However, the business objects would be tightly coupled to the LINQ to SQL code, and therefore there would be no easy way to abstract the LINQ to SQL code so it could be mocked for unit testing purposes.  Let's see how we can migrate this code into a concrete implementation of our repository pattern abstraction.

Concrete DTO's

Before we jump into the concrete repository and context objects, let's take a quick look at how we implement the DTO's.  LINQ to SQL makes this relatively easy since the entity classes code generated by LINQ to SQL are the DTO's.  All we need to do is make them implement our DTO interfaces.  Since the LINQ to SQL code generation is done using partial classes, this is quite easy.
First, the concrete IOrderDto class:
partial class Order
    : IOrderDto
{

    byte[] IOrderDto.Timestamp
    {
        get
        {
            return this.Timestamp.ToArray();
        }
        set
        {
            this.Timestamp = new Binary(value);
        }
    }

    IEnumerable<ILineItemDto> IOrderDto.LineItems
    {
        get
        {
            return this.LineItems.Cast<ILineItemDto>();
        }
    }

}
At a minimum, we need to define the partial class Order (which binds to the code-generated Order class at compile time) that implements the IOrderDto interface.  But we also need to add a couple explicit IOrderDto property implementations.
The first is due to the fact that the Order class that was code-generated by LINQ to SQL has a Timestamp property that is of the LINQ to SQL-specific type Binary.  However, the IOrderDto interface defines the Timestamp property as a byte array, which is not specific to a data access technology.  Therefore we need to add the IOrderDto.Timestamp property explicitly and marshal the Binary and byte array values back and forth.
The second explicit property implementation is IOrderDto.LineItems.  The Order class code-generated by LINQ to SQL also defines a LineItems property, but it's of type EntitySet<LineItem>.  Therefore, we need to convert between the two and a handy way to do it is to use the Cast extension method.
The concrete ILineItemDto class is very similar, but we only have to add an explicit implementation of the ILineItemDto.Timestamp property:
partial class LineItem
    : ILineItemDto
{

    byte[] ILineItemDto.Timestamp
    {
        get
        {
            return this.Timestamp.ToArray();
        }
        set
        {
            this.Timestamp = new Binary(value);
        }
    }

}
Now, with the concrete DTO's defined, let's move onto the concrete repository and context classes.

Concrete Order Repository

You may recall the role of the repository object is to be a factory for all of the other objects needed by the data access layer.  Its main job is to create the associated context.  Therefore our order repository will need to create an order context.  It will also need to be able to create any DTO's required by data access methods that take DTO's as inputs.
Given that, here's our concrete OrderRepository class:
public sealed class OrderRepository
    : IOrderRepository
{

    IOrderContext IRepository<IOrderContext>.CreateContext(bool isTransactional)
    {
        return new OrderContext(isTransactional);
    }

    IOrderDto IOrderRepository.CreateOrderDto()
    {
        return new Order();
    }

    ILineItemDto IOrderRepository.CreateLineItemDto()
    {
        return new LineItem();
    }

}
A fairly simple implementation of a factory.  The CreateContext method creates a new instance of our concrete OrderContext (whose implementation we'll see just ahead), passing a value to its constructor telling it whether or not it needs to be transactional.  Then we have two methods for creating DTO's: CreateOrderDto and CreateLineItemDto.  Notice that what we're actually returning are instances of the two entity classes code generated by LINQ to SQL, since they implement the required DTO interfaces.

Concrete Order Context

While the repository object is the factory that creates all the data access objects, the context object plays the star role in actually performing the data access operations.  Therefore, the OrderContext class is going to have the most meat of any of our concrete repository pattern classes.  Let's examine the OrderContext class in chunks since there's a lot going on. 

Basic Implementation of OrderContext

First, let's take a look at the class definition itself and its constructor, which we know takes an isTransactional boolean parameter:
public sealed class OrderContext
    : IOrderContext
{

    private OrdersDataContext _db;
    private TransactionScope _ts;

    public OrderContext(bool isTransactional)
    {
        _db = new OrdersDataContext();
        if (isTransactional)
            _ts = new TransactionScope();
    }

}
As you can see, our OrderContext object wraps an instance of an OrdersDataContext object (via the _db field) which is a LINQ to SQL DataContext.  Therefore, our OrderContext object is essentially an abstraction of a LINQ to SQL data context.  When it implements the remaining IOrderContext interface members, it does this by making calls against that LINQ to SQL DataContext instance.
The OrderContext also wraps a TransactionScope object, which it only creates if the calling OrderRepository object specified that the context is transactional.  That transaction is committed in the CompleteTransaction method, which is required by the IContext base interface:
    void IContext.CompleteTransaction()
    {
        if (_ts != null)
            _ts.Complete();
    }
The last place we interact with this transaction is at the end of the order context object's lifecycle, in the IDisposable.Dispose implementation:
    void IDisposable.Dispose()
    {
        if (_ts != null)
            _ts.Dispose();
        _db.Dispose();
    }
We also dispose the OrdersDataContext which closes up the database connection.
So far with the OrderContext class we've implemented the creation and clean-up of the object.  Now we need to implement the methods defined by the IOrderContext interface that actually do the data access. 

IOrderContext.FetchInfoList Implementation

First, let's take a look at the implementation of the IOrderContext interface's FetchInfoList method:
    IEnumerable<IOrderInfoDto> IOrderContext.FetchInfoList()
    {
        var query =
            from o in _db.Orders
            orderby o.Date
            select new OrderInfoData
            {
                Id = o.Id,
                Customer = o.Customer,
                Date = o.Date
            };
        return query.Cast<IOrderInfoDto>();
    }

    private class OrderInfoData
        : IOrderInfoDto
    {
        public int Id { get; set; }
        public string Customer { get; set; }
        public DateTime Date { get; set; }
    }
The purpose of this method is to return a list of all the orders in the database.  They come back as the light-weight IOrderInfoDto objects.  Our implementation of this method performs a standard LINQ to SQL query against the Order entity objects in the OrdersDataContext.  However, we don't want to return all the data of each order; the IOrderInfoDto object is only a subset of that data.  An easy solution is to perform a LINQ projection of Order objects into IOrderInfoDto objects, which generates only the T-SQL necessary to populate the data required by the IOrderInfoDto objects.  And of course we need a concrete IOrderInfoDto class to create and return; an easy approach is just to declare the private OrderInfoData class shown above, just below the FetchInfoList method.

IOrderContext.FetchSingleWithLineItems Implementation

The next data access method required by the IOrderContext interface is FetchSingleWithLineItems:
    IOrderDto IOrderContext.FetchSingleWithLineItems(int id)
    {
        var options = new DataLoadOptions();
        options.LoadWith<Order>(o => o.LineItems);
        _db.LoadOptions = options;
        var query =
            from o in _db.Orders
            where o.Id == id
            select o;
        return query.Single();
    }
This method is similar to the previous in that it is another LINQ to SQL query.  However, it returns a single DTO (instead of a collection) and that DTO is an instance of the full Order entity class, which happens to implement the IOrderDto interface.
But we don't want to return just the data of the order itself.  We also want to return all of its child line item data (which will be accessible via the LineItems property), and preferably all with one call to the database.  We can do this with a little LINQ to SQL magic by configuring the LoadOptions property of the OrdersDataContext, telling it that when it loads the data of an Order object, it should go ahead and load the child LineItem objects contained within the LineItems property as well.
The FetchInfoList and FetchSingleWithLineItems methods pretty much take care of all the data access querying.  Now we need to implement the insert, update, and delete operations. 

Insert, Update, and Delete Method Implementations

While the query method implementations simply took advantage of the built-in LINQ capabilities of the entity classes, we can't quite do the same with the insert, update, and delete methods.  Normally, with LINQ to SQL you can make whatever state changes you want to those objects and when you're ready to persist those changes back to the database, you just call the SubmitChanges method on the DataContext object, which keeps track of which entity objects have changed. 
While this approach could work in our situation, it requires us to maintain references to all of these objects between data access operations performed by our CSLA.NET business objects and this is a problem.  All CSLA.NET business objects need to be serializable so they can function properly in the CSLA.NET DataPortal.  This means that any objects contained within a CSLA.NET business object must also be serializable.  Unfortunately, LINQ to SQL objects are not.
There are some potential workarounds, like one where you create a new LINQ to SQL entity object, load it with data, and then attach it to a DataContext as if it were previously fetched by that DataContext.  But this is not using LINQ to SQL as it was intended and in many cases produces unexpected behavior.  The only reliable solution is to use a different mechanism than the LINQ to SQL entity objects and the DataContext to persist changes back to the database.  One of the easiest is the use of stored procedures.  This, in fact, is the same approach that Rocky uses with his LINQ to SQL data access code in his sample ProjectTracker application.  In the Orders.dbml file in our project, we simply drag those stored procedures over from the database and LINQ to SQL adds them as methods to the OrdersDataContext.
The last thing we should mention about the insert, update, and delete methods is that, unlike the query methods which returned DTO's (or collections of them), these methods take DTO's as parameters (except for delete, which typically only takes an ID).  Therefore, the caller (in this case the CSLA.NET business object) needs to be able to create an empty DTO and populate it, which is why the OrderRepository class has the DTO creation methods (there's a rough sketch of this from the business object's side after the listing below).
So, without further delay, here are the insert, update, and delete method implementations for the order entities:
    void IOrderContext.InsertOrder(IOrderDto newOrder)
    {
        int? id = null;
        Binary timestamp = null;
        _db.insert_order(
            ref id,
            newOrder.Customer,
            newOrder.Date,
            newOrder.ShippingCost, 
            ref timestamp);
        newOrder.Id = id.Value;
        newOrder.Timestamp = timestamp.ToArray();
    }

    void IOrderContext.UpdateOrder(IOrderDto existingOrder)
    {
        Binary newTimestamp = null;
        _db.update_order(
            existingOrder.Id,
            existingOrder.Customer,
            existingOrder.Date,
            existingOrder.ShippingCost,
            existingOrder.Timestamp, 
            ref newTimestamp);
        existingOrder.Timestamp = newTimestamp.ToArray();
    }

    void IOrderContext.DeleteOrder(int id)
    {
        _db.delete_order(id);
    }
We also have insert, update, and delete methods for the line item entities, but they are very similar, so I'll save them for the sample code download at the end.
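To tie that back to the earlier point about DTO creation, here's a rough sketch of how a business object's insert might drive these methods (hypothetical and simplified, continuing the made-up OrderEdit class from the top of the post; CustomerProperty, DateProperty, and the other property definitions are assumed CSLA managed properties, and the sample application's actual DataPortal_Insert will differ in the details):

    private void DataPortal_Insert()
    {
        using (var ctx = _repository.CreateContext(true))   // transactional
        {
            // Create an empty DTO via the repository, populate it from the
            // business object's state, and hand it to the context.
            IOrderDto dto = _repository.CreateOrderDto();
            dto.Customer = ReadProperty(CustomerProperty);
            dto.Date = ReadProperty(DateProperty);
            dto.ShippingCost = ReadProperty(ShippingCostProperty);

            ctx.InsertOrder(dto);

            // The stored procedure handed back the new Id and Timestamp
            // through the DTO, so copy them into the business object.
            LoadProperty(IdProperty, dto.Id);
            LoadProperty(TimestampProperty, dto.Timestamp);

            ctx.CompleteTransaction();
        }
    }

The key point is that the business object only ever talks to the IOrderRepository and IOrderContext interfaces, so the whole LINQ to SQL implementation can be swapped out or mocked in unit tests.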

Sample Application

To make all of this really gel, I wanted to include a fairly extensive sample application that demonstrates everything we've been talking about in the last three posts: dependency injection and the Repository pattern with CSLA.NET.  However, this sample goes a bit further than what we've talked about so far, so I wanted to briefly discuss it here but save the details for an upcoming post.
A lot of what we've been building here are mechanisms to abstract each layer of our application so the layers are more loosely coupled and more testable.  This is why dependency injection is so useful and why patterns like Repository really help.  But the Repository pattern is really just a means to abstract the data access layer.  What if you wanted to abstract the business layer?  What if you wanted to write unit tests that exercise your UI layer in isolation so that you wouldn't have to build concrete business objects in your tests?  That's the one big additional thing that the sample application sets out to do.  In short, it does this by defining abstractions (in the form of interfaces) for each business class and pulling the static factory methods out into separate factory interfaces and classes.
So here's a quick rundown of the projects that are included in the sample application solution, which is called CslaRepositoryTest:

  • DataAccess - contains the abstract data access layer interfaces
  • DataAccess.SqlServerRepository - a concrete implementation of the types in DataAccess
  • BusinessLayer - CSLA.NET business layer objects
  • BusinessLayer.Test - unit tests that test the business layer, mocking out the data access layer
  • Gui - a Windows Forms GUI that uses the business objects in BusinessLayer.  The GUI uses a Model/View/Presenter-style architecture so it can be more easily tested.
  • Gui.Test - unit tests that test the GUI layer, mocking out the business objects
  • Core - contains common types not necessarily specific to a particular layer
Some other notes:

  • The BusinessLayer is compiled against CSLA.NET 3.5.
  • The sample application uses v1.2 of the Unity Application Block as an IOC Container.  The Gui project uses a file called Unity.config to configure Unity.
  • The DataAccess.SqlServerRepository project uses SQL Server Express to attach to a file instance of the database, just like the CSLA.NET ProjectTracker sample application does.  The database file is Orders.mdf and is located in the same directory as the solution.
  • Both test projects use NUnit as their unit testing framework.  You should be able to run all of the unit tests in the NUnit-GUI application.
  • Both test projects use Rhino Mocks v3.5 for mocking objects.
  • The binaries for all of the 3rd party dependencies mentioned above are included in a lib folder in the download.  The only thing you need to have installed is Visual Studio 2008 and SQL Server 2005 Express (which usually comes with Visual Studio 2008).
Finally, here's the sample application.  (UPDATE: Changed link to point to my GitHub repo)

Conclusion

In this post we walked through a concrete implementation of our abstract data access layer that uses the Repository pattern.  Our implementation used LINQ to SQL, but we could have easily created one that used any other data access technology.  In an upcoming post, we'll dig a little further into how to abstract the business layer itself.