Thursday 30 January 2014

Working with Entity Framework Code First and SQL FILESTREAM

Whilst looking around at these two technologies I couldn't find any particularly useful information about how to get them working together.  So I wanted to put this up so that, should anyone else want to try something similar, there's some (hopefully) useful information out there.

What am I trying to do?

The system that I am involved with at the moment needs to store files with some of its entities.  We've started out by using Entity Framework 5 Code First (although in this post I'll be working with Entity Framework 6) to create the data model and database mappings, which is working nicely and is proving itself a very useful framework.

When looking at saving file data along with an entity there are a few choices:
  1. Store the file contents directly in the database
  2. Save the file contents to the file system
  3. Use a database table with FILESTREAM enabled
  4. Use a FILETABLE
The first option is fine for small files, but I don't really want to load all of the file data each time I query the table, and the files might get pretty big.

The second option is also reasonable, and with recent versions of Windows Server we even have a transactional file system.  So far so good, but it would be nice not to have to think about two persistence mechanisms.

FILESTREAM was introduced in SQL Server 2008 and allows you to store unstructured data on the file system, so it feels like we're using the right tool for each job while keeping everything nicely in the same package.  The problem here is that Entity Framework doesn't support FILESTREAM.

Lastly there's FILETABLE, which was introduced with SQL Server 2012.  This is like FILESTREAM, but rather than defining it at a column level you get a table created for you which exposes information from the file system.  It is a really nice feature, but it didn't quite fit with how we're structuring our data, and it's also not supported by Entity Framework.

So the option that I would ideally like to work with here is FILESTREAM, as it gives me all of the database goodness but with the performance of the file system.  There is just that minor sticking point of it not being supported by Entity Framework.  After a fair amount of playing around with the technology and research into the problem, I figured that I could probably make it work by falling back to basic ADO.NET for the FILESTREAM part of the requests.  Whilst this was an option, I didn't really want to start having different technology choices for doing database work, so the goal was to see how much I could get away with in Entity Framework.

Setting up the test solution

The database server

A default install of SQL Server will not give you a FILESTREAM-enabled server, but you can enable it.  I'm not going to cover how in this post as Microsoft has some pretty good documentation available on how to do this.

This also means that we can't let Entity Framework create the database for us, so you will need to create an empty, FILESTREAM-enabled database and point your connection string at that.

The outline of the project

I created the solution in Visual Studio 2013, and delving into the most creative parts of my mind I came up with a solution that has hotels, with rooms and multiple pictures of each room (okay, not my most creative moment, but it gets the job done).

So in this solution I have my data model which is pretty simple.  I have some locations, at each location there are a number of hotels, each hotel has rooms and each room has photos.

The Location, Hotel, Room and Photo entities
Solution Data Model
Of all of these the important one is Photo.  This entity has some basic properties, Title and Description, which describe the photo; then there are the navigation properties for getting back to the room; and lastly there's the Data property, which is intended to hold the contents of the file.  Normally Entity Framework would see this property and its type (a byte array) and map it to an appropriately named column of type VARBINARY(max).  Whilst we could still let it do this, it would somewhat defeat the purpose of the exercise as we'd be storing the contents of the file directly in the database, so we need to add some configuration to tell Entity Framework to ignore this property when mapping.
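As a sketch, the Photo entity might look something like this (the key and Room navigation property names are my assumptions from the model outline):

```csharp
public class Photo
{
    public int Id { get; set; }

    public string Title { get; set; }
    public string Description { get; set; }

    // The file contents. Left to its own devices Entity Framework would
    // map this byte array to a VARBINARY(max) column, so we'll configure
    // it to be ignored instead.
    public byte[] Data { get; set; }

    // Navigation back to the room the photo belongs to.
    public int RoomId { get; set; }
    public virtual Room Room { get; set; }
}
```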

Photo configuration information
Photo entity configuration
I'm using the Fluent API here, but you should be able to do this using Data Annotations as well.
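A minimal sketch of that Fluent API configuration (the configuration class name here is my own):

```csharp
using System.Data.Entity.ModelConfiguration;

public class PhotoConfiguration : EntityTypeConfiguration<Photo>
{
    public PhotoConfiguration()
    {
        // Don't map Data to a column - we'll add the FILESTREAM column
        // to the table ourselves with a migration instead.
        Ignore(p => p.Data);
    }
}
```

Remember to register the configuration in your context's OnModelCreating with modelBuilder.Configurations.Add(new PhotoConfiguration()).  The Data Annotations equivalent would be decorating the Data property with [NotMapped].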

At this point, if we were to deploy the database, we would get a table with no column for the file data and an unmapped property on our entity.  What we need to do next before any of this is useful is to somehow get a FILESTREAM column into the Photo table.  The solution is Entity Framework migrations, the basics of which I'll not cover here and instead leave as an exercise for the reader.

Migrations provide us with a migration class for each migration added, with Up and Down methods to apply and roll back the changes to the database.  The useful method for us in this class is the Sql method, which allows us to execute raw SQL commands; using this we can add our ROWGUID column and our FILESTREAM column with all the constraints we need (and, of course, the appropriate commands in the Down method to remove it all again).

Migrations code
Migrations SQL commands
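The Up and Down methods might look something like this sketch (it assumes the table ends up named dbo.Photos; the column and constraint names are my own):

```csharp
using System.Data.Entity.Migrations;

public partial class AddPhotoFileStream : DbMigration
{
    public override void Up()
    {
        // FILESTREAM requires a unique ROWGUIDCOL column on the table.
        Sql(@"ALTER TABLE dbo.Photos
              ADD RowGuid UNIQUEIDENTIFIER NOT NULL ROWGUIDCOL
                  CONSTRAINT DF_Photos_RowGuid DEFAULT NEWID()
                  CONSTRAINT UQ_Photos_RowGuid UNIQUE");

        // The file data itself, stored on the file system by SQL Server.
        Sql("ALTER TABLE dbo.Photos ADD Data VARBINARY(MAX) FILESTREAM NULL");
    }

    public override void Down()
    {
        Sql("ALTER TABLE dbo.Photos DROP COLUMN Data");
        Sql("ALTER TABLE dbo.Photos DROP CONSTRAINT UQ_Photos_RowGuid");
        Sql("ALTER TABLE dbo.Photos DROP CONSTRAINT DF_Photos_RowGuid");
        Sql("ALTER TABLE dbo.Photos DROP COLUMN RowGuid");
    }
}
```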
Now if we run the Update-Database command from the Package Manager Console we get a table with all the right columns of the right types for being able to use FILESTREAM.

So that's half the battle won; the next challenge is being able to write to and read from the table.

Storing and retrieving file data

So how do we query data in a FILESTREAM column?  Well this is the bit where we fall back to the System.Data.SqlTypes namespace, specifically the SqlFileStream class.  We use this class to read the contents of the file back from the server as a stream, but this only works in the context of a SQL transaction.

So the first thing we need to do is get the file path and the SQL transaction context, which we can then pass to the SqlFileStream constructor to get our stream; after that it's just a case of reading from the byte array in our entity and writing to the SqlFileStream stream.  To get this information we need to run a custom SQL statement.  We could do this using a SqlCommand object, but I still want to stick with Entity Framework, and fortunately there's the DbContext.Database.SqlQuery<TElement> method, which we can use to run raw SQL statements.  It also handles parameters, so we can parameterize the query (great for guarding against SQL injection attacks), and it returns an enumerable collection mapped to TElement (which does not have to be part of our data model).
Raw Data Query
Raw Data Query
The FileStreamRowData class here is a custom class with a string property for the path, and a byte array for the transaction context.
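As a sketch, the query might look something like this (HotelContext and the table and column names are my assumptions from the model above; SqlQuery maps result columns to properties by name, so the aliases matter):

```csharp
using System.Data.SqlClient;
using System.Linq;

// Our own DTO for the query results - not part of the EF data model.
public class FileStreamRowData
{
    public string Path { get; set; }
    public byte[] TransactionContext { get; set; }
}

public static class PhotoQueries
{
    // Fetches the FILESTREAM path and the current transaction context
    // for a photo row.  GET_FILESTREAM_TRANSACTION_CONTEXT() only
    // returns a value inside an active transaction.
    public static FileStreamRowData GetRowData(HotelContext context, int photoId)
    {
        return context.Database.SqlQuery<FileStreamRowData>(
            @"SELECT Data.PathName() AS Path,
                     GET_FILESTREAM_TRANSACTION_CONTEXT() AS TransactionContext
              FROM dbo.Photos
              WHERE Id = @id",
            new SqlParameter("@id", photoId)).First();
    }
}
```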

Running all of this inside a transaction scope will get us the information required (the call to "First" will enumerate the collection) to pass to the SqlFileStream constructor; we can then use this to write data to the stream.
Writing to the FILESTREAM
Writing to the FILESTREAM
The same applies when reading back from the database, but with the source and destination reversed.  Also, when writing to the database, you would need to save the entity first so that the row exists.  Wrapping up the Entity Framework bit in the same transaction scope means that even if you call "SaveChanges" on your context, if the transaction does not complete successfully then the changes are still rolled back.
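Putting it all together, the write side might look roughly like this sketch (HotelContext, the Photos set and the table names are illustrative; FileStreamRowData is the custom class described above):

```csharp
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;
using System.Linq;
using System.Transactions;

public static class PhotoWriter
{
    // Saves the entity, then streams its Data to the FILESTREAM column,
    // all inside one transaction scope.
    public static void SavePhoto(Photo photo)
    {
        using (var scope = new TransactionScope())
        {
            using (var context = new HotelContext())
            {
                context.Photos.Add(photo);
                context.SaveChanges(); // the row must exist before streaming

                var row = context.Database.SqlQuery<FileStreamRowData>(
                    @"SELECT Data.PathName() AS Path,
                             GET_FILESTREAM_TRANSACTION_CONTEXT() AS TransactionContext
                      FROM dbo.Photos
                      WHERE Id = @id",
                    new SqlParameter("@id", photo.Id)).First();

                // Note: PathName() returns NULL while the Data column is
                // NULL, so the column needs an initial value (e.g. a
                // default of 0x) before it can be opened as a stream.
                using (var stream = new SqlFileStream(
                    row.Path, row.TransactionContext, FileAccess.Write))
                {
                    stream.Write(photo.Data, 0, photo.Data.Length);
                }
            }

            // If we never reach Complete, the row and the file contents
            // are rolled back together.
            scope.Complete();
        }
    }
}
```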

So does it work?

Well, yes it does, and it works pretty nicely too.  It's maybe not the final solution that I'll use, as I'm still investigating a couple of other options, but it's certainly not a solution that I would be upset about using, and by hiding the complexity in the data services the client need never know how the file information is being held in the database, or using which technologies.

How do I play with it?

You could probably work out most of what you need from this post, but for convenience's sake I've also put the whole solution up on GitHub, so feel free to head over and take a look.  If you come up with any suggestions or improvements then feel free to contribute.