Entity Framework Core Owned Types explained

Owned entity types have been available since EF Core 2.0. The same .NET type can be shared among different entities. An owned entity does not have a key or identity property of its own; it is always a navigation property of another entity. In DDD terms, we could see this as a value/complex type. Those coming from EF 6 may see a similarity with complex types in their model, but the way owned types work and behave in EF Core is different, and there are some gotchas you need to watch out for. We’ll explore these in detail here.

Let us work with the model shown below:

public class Student
{
    public int Id { get; set; }

    public string Name { get; set; }

    public Address Home { get; set; }
}

public class Address
{
    public string Street { get; set; }

    public string City { get; set; }
}

Here Student owns Address, which is the owned type and does not have its own identity property. Address becomes a navigation property on Student and always has a one-to-one relationship with it (at least for now).

The DbContext would be defined like this:

public class StudentContext : DbContext
{
    public DbSet<Student> Students { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Student>()
            .OwnsOne(s => s.Home);
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer("Server=(localdb)\\mssqllocaldb;Database=StudentDb; Trusted_Connection=True;App=StudentContext");
        optionsBuilder.EnableSensitiveDataLogging();
    }
}

An owned type cannot have a DbSet<> of its own; in OnModelCreating you specify the Home property as an owned entity of Student.

Home would be mapped to the same table as Student.

Let us fire up this model and see it working.

using Microsoft.EntityFrameworkCore.Infrastructure;
using Microsoft.Extensions.Logging;

class Program
{
    static void Main(string[] args)
    {
        var _context = new StudentContext();
        _context.GetService<ILoggerFactory>().AddConsole();
        _context.Database.EnsureDeleted();
        _context.Database.EnsureCreated();

        InsertStudent(_context);
    }

    private static void InsertStudent(StudentContext context)
    {
        var student = new Student
        {
            Name = "Student_1",
            Home = new Address
            {
                Street = "Circular Quay",
                City = "Sydney"
            }
        };
        context.Students.Add(student);
        context.SaveChanges();
    }
}

I have added the Microsoft.EntityFrameworkCore.SqlServer and Microsoft.Extensions.Logging.Console packages.
From the console logs we can see that the Students table was created and a row inserted.

CREATE TABLE [Students] (
    [Id] int NOT NULL IDENTITY,
    [Name] nvarchar(max) NULL,
    [Home_City] nvarchar(max) NULL,
    [Home_Street] nvarchar(max) NULL,
    CONSTRAINT [PK_Students] PRIMARY KEY ([Id])
);

To query, just fetch the students; the owned entity is included automatically.

var students = _context.Students.ToList();

We can also store Address in a separate table, which we could not do with complex types in EF6. Simply call .ToTable() and provide a different name.

modelBuilder.Entity<Student>()
    .OwnsOne(s => s.Home)
    .ToTable("HomeAddress");

Now when you run the app, you will see two tables being created. Note the key column of the HomeAddress table: it references the Students table’s identity.

CREATE TABLE [Students] (
    [Id] int NOT NULL IDENTITY,
    [Name] nvarchar(max) NULL,
    CONSTRAINT [PK_Students] PRIMARY KEY ([Id])
);
CREATE TABLE [HomeAddress] (
    [StudentId] int NOT NULL,
    [City] nvarchar(max) NULL,
    [Street] nvarchar(max) NULL,
    CONSTRAINT [PK_HomeAddress] PRIMARY KEY ([StudentId]),
    CONSTRAINT [FK_HomeAddress_Students_StudentId] FOREIGN KEY ([StudentId]) REFERENCES [Students] ([Id]) ON DELETE CASCADE
);

You can ignore properties that you do not want EF to track.

public class Address
{
    public string Street { get; set; }

    public string City { get; set; }

    public string State { get; set; } // ignore this
}

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Student>()
        .OwnsOne(s => s.Home, (h) =>
        {
            h.Ignore(a => a.State);
            h.ToTable("HomeAddress");
        });
}

There are certain things to keep in mind, especially with change tracking. With EF Core, do not assume the same code from EF6 will give you similar behaviour. In my view these changes are welcome and make tracking more intuitive and easier to reason about.

When you use Add, Attach, Update or Remove on either a DbSet<> or the DbContext, it affects all reachable entities. Here is what it would look like:

context.Students.Add(student);

This also marks the Address in an Added state.
But if you do not want to track all the entities in the graph:

context.Entry(student).State = EntityState.Added;

When you do this, only the student is marked for insert; the address is not. So how do you change only the state of the address?

var address = _context.Entry(student).Reference(s => s.Home).TargetEntry;
address.State = EntityState.Unchanged;

When you mark an entity in the graph for update, all of its properties are marked for update. In a disconnected (n-tier) scenario, you would need to track changes on your entity externally and let EF know about them. You need the original state of the entity and some processing to know which properties were changed. Or you could go back to the database, fetch the entity and compare its state.

var entry = _context.Attach(student);
var dbValues = entry.GetDatabaseValues(); // gets only the student
entry.OriginalValues.SetValues(dbValues);
_context.SaveChanges();

This would update only those columns which had changes on them. But it would only affect the student object, not the address: the address would still be in an Unchanged state, since entry.GetDatabaseValues() above fetches only student values. To track changes on the address, you need to explicitly go through its own entry.

var entry = _context.Attach(student);
var adEntry = _context.Entry(student.Home);
adEntry.OriginalValues.SetValues(adEntry.GetDatabaseValues()); // gets home address
entry.OriginalValues.SetValues(entry.GetDatabaseValues()); // gets student
_context.SaveChanges();

Now on SaveChanges(), an update is issued for the Address too if any changes were found.

Windows 10 Fall Creators update crashes App Pool

Windows 10 Fall Creators update was not yet available on my PC, so I manually pulled the update. The upgrade seemed to run fine, except that when I tried starting one of my development services hosted on IIS, it did not start. Instead I saw Service Unavailable HTTP Error 503. I checked the application pool assigned to this web site and it had stopped.

Under the Windows event log I saw this:
IIS-W3SVC-WP(2307)

The worker process for application pool <Pool Name> encountered an error ‘Cannot read configuration file’ trying to read configuration data from file ‘\\?\<EMPTY>’, line number ‘0’. The data field contains the error code.

I knew for sure that the update had caused this issue as I was working on this particular web site just before the restart prompted me to close down my work.

I started looking at the user account under which I was running the app pool. It checked out fine. Next I just went and cleared off all the files under the Inetpub\temp folder. After restarting the services the web site came up without fuss this time.

I got curious since I had no idea what had caused the issue in the first place and started searching for support articles and came across this Web applications return HTTP Error 503 and WAS event 5189 on Windows 10 Version 1709 (Fall Creators Update)

This explained why I was facing the issue, though the error message and the event logged were different. You also need to stop the W3SVC service, which the article missed(?); without this, some files cannot be deleted and Remove-Item fails.

Solution
Stop the “Windows Process Activation Service” and “W3SVC” services and clean out (delete) all the files under C:\Inetpub\temp\AppPools*. Start your services and the sites should be back at work.

WCF - One-way or the other

WCF One-way

I have always found WCF to be a great technology for many use cases. Before I ruffle anyone’s feathers out there, I love what Web API is capable of and if I am looking at providing HTTP services or anything which is targeted over internet, I would blindly choose Web API.
I am also eagerly waiting to see WCF service framework becoming a part of .NET Core. We already have the WCF client libraries available for the .NET Core version.
Having cleared that up, let me get back to one such use case for WCF: making a fire-and-forget, or one-way, call. This is useful when the client truly does not care about the result, or when it needs to kick off a process on the server (usually long-running) and does not want to wait for it to finish.
WCF comes with great variety, power and flexibility, but to truly harness it, one needs a deep understanding of its internals. You can use it out of the box without much mucking around, but sometimes its behaviour may not be obvious.

A quick recap of the WCF one-way pattern.
The default behaviour of a service operation is the request-reply pattern; to make it one-way you simply set IsOneWay on the OperationContract.

[ServiceContract]
public interface IOneWayService
{
    [OperationContract(IsOneWay = true)]
    void Process(int seed);
}

A few things to keep in mind when decorating an Operation as one-way.

  • The method has to return void
  • You cannot return Faults to the client. That means you cannot decorate the operation with FaultContract(typeof(Exception)) attribute.

Even if you unintentionally did the above on a one-way operation, the service would throw an error when attempting to start.

Your service implementation is going to be nothing different here, and neither is the hosting part, so I won’t be delving into them. You can simulate some load by doing a Thread.Sleep. So, does one get a fire-and-forget operation from the client? It depends.
In fact it depends on quite a few things. First, let us see what we can expect with our current implementation. Let me show you my client proxy.

public class OneWayProxy : ClientBase<IOneWayService>, IOneWayService
{
    public void Process(int seed)
    {
        Channel.Process(seed);
    }
}

I like to hand-code my proxy/client for the service; I’ll probably keep that for another post. I also implement my clients differently to what I have shown here, but even this is better than the freebie client you get from Visual Studio.

Let us look at the code calling our client.

OneWayProxy proxy = new OneWayProxy();
proxy.Process(5);
//proxy.Process(10);
proxy.Close();

The input parameter is just to make the service look important.

Here is what we see from running this implementation.

  1. The call to the proxy would be asynchronous (at the client). You can uncomment the next call and verify that. They would not block.
  2. Closing the channel might block. If the binding used is NetTcpBinding, it supports transport-level sessions by default, which means the channel is kept open until the server completes processing all the client’s calls. If you use a transport without a session, like basicHttpBinding, then closing the channel will not block.
  3. The calls are dispatched synchronously on the service. Meaning, your next call would only get processed after completing the previous.

So what we learn is that using one-way throws up a few surprises. It is fire-and-forget only for the operation calls. When you have long-running processes, you might not always want to wait for the operations to complete before closing the channel. And yes, you should always close the channel so that it is returned to the server’s pool.

Since most uses of WCF within the firewall prefer the TCP protocol over HTTP for speed and security, and I would like to close my channel after each call without it blocking, the above implementation would not be the most useful.

So what are our options here?
To close the proxy before the operation finishes -

  • We can use a session-less transport, such as BasicHttpBinding, or turn on reliable sessions on NetTcpBinding. TCP provides reliability at the transport level, but you get a message-level reliable session by enabling it explicitly at the binding. This comes at a cost, since many more messages are exchanged between the client and server to ensure delivery, so it will not give the best performance due to its chatty nature. <reliableSession enabled="true" />

  • You can turn off session support on NetTcpBinding altogether. This is done by marking the binding as ‘oneWay’, which requires you to create a custom binding. This will not block the channel from closing. In the example below we have added tcpTransport support.

    <bindings>
      <customBinding>
        <binding name="onewayBinding">
          <oneWay />
          <tcpTransport />
        </binding>
      </customBinding>
    </bindings>

Remember to use the same binding at the client also.

By default, all calls from a client are processed synchronously. If more than one call is made from a client, they are queued up on the server. The proxy is also blocked from closing until all the processing is completed.

To enable concurrent processing of messages at the service:

The above custom binding also ensures that messages are processed concurrently, which means each request is handled on a different thread. Or you could again use basicHttpBinding. However, if you are using NetTcpBinding or another session-shaped binding, you should mark your service’s concurrency mode as Multiple.
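For completeness, here is a sketch of what that looks like. The ServiceBehavior attribute goes on the service implementation; the class name OneWayService and the empty method body are my own placeholders, and the contract is repeated from earlier in the post:

```csharp
using System.ServiceModel;

// Contract from earlier in the post.
[ServiceContract]
public interface IOneWayService
{
    [OperationContract(IsOneWay = true)]
    void Process(int seed);
}

// ConcurrencyMode.Multiple tells WCF to dispatch each incoming message
// on its own thread instead of queuing them one at a time per session.
// The service implementation must then be thread-safe.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public class OneWayService : IOneWayService
{
    public void Process(int seed)
    {
        // long-running work would go here
    }
}
```

With this in place, a second one-way call from the same client no longer waits for the first to finish processing on the server.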

Migration: Legacy application and trigger happy

Many of the major projects of my career have been application migrations involving a rewrite using new technology. I can think of only one instance where it was a big-bang approach, where we built a new system and replaced the other in one go. It wasn’t as simple as it sounds, but we could justify the approach and it worked well.
However every other time, we have built the new system invariably adding new features while keeping the old system running and accessible.
There have been many different approaches to this strategy. In some cases we would force the users to switch to the new application when available, while disabling features in the old. Other times we would keep the old app running in parallel while making continuous releases of the new application. We have even had to maintain and service the old application to keep the business happy, and sometimes to make it gel with features being introduced in the newer application, especially when database changes were involved.

I’ll go through one such case where we had to rewrite a critical application while keeping access to the old system. We discovered that a lot of business functionality was kept in database triggers. There were many apps and I guess at that point someone decided to use triggers since they then needn’t touch any of the apps to add new features. You can read my take on business in database here.

We had decided to use a DDD approach, and this meant consolidating all those business rules from the triggers into their respective domains. The database, being a common layer for both applications, called for some strategy here.

The triggers couldn’t simply be disabled, nor could we let them run their logic when the new applications interacted with those underlying tables.

We needed to restrict those triggers to the old applications only. The triggers will fire, no stopping that, but we can stop the trigger’s body from continuing execution of the SQL. There are a couple of ways of doing this.

  1. Using App Name

    This is the least-impact way of determining whether the trigger needs to continue execution or return. Providing an app name in the connection string (;app=YourApplicationName) makes it available in your SQL session. I also make it a practice to include this, as it helps in profiling your database more easily. In your SQL (trigger) you can now check:

    if (APP_NAME() = 'YourApplicationName') RETURN;
  2. Using a column to track the transaction source

    Depending on your situation, this might be intrusive or acceptable. If this is a new column, you might want it to have a default value so that the older applications can rely on it. Your new applications should set a specific value for each transaction against these tables; check for those values in the trigger and exit from execution. Of course, the trigger would execute completely for transactions from your older applications. With this column you can also gain some insight into who is transacting with which app. Be sure to check the inserted or deleted special table correctly, depending on the operation that fires the trigger.

In both the above methods, there would be no change in the legacy applications.

BTW, many of the triggers were good candidates for domain events.

I have used both approaches and they have worked well for me. If there are other ways of doing this, let me know in your comments.

Practical Patterns: Control with Builder

Builder-Director

There are many situations where one needs to construct a complex object before consuming it. You might need to pass many arguments, set configuration and other mandatory properties before the instance is ready for use. Passing parameters through constructors is a sure way of ensuring that your objects are always in a proper state when created, but for complex objects, doing everything through constructors can be cumbersome, and in some cases restrictive too.
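To make the pain concrete, here is a hypothetical constructor-only version of a product (a simplified stand-in, not the Account type built later in this post). With several parameters of the same type, the compiler cannot catch call sites that swap them:

```csharp
// A hypothetical constructor-only design: all mandatory values
// must be supplied at once, in the right order.
public class Account
{
    public int Number { get; }
    public string Type { get; }
    public string HomeBranch { get; }

    public Account(int number, string type, string homeBranch)
    {
        Number = number;
        Type = type;
        HomeBranch = homeBranch;
    }
}
```

A call like new Account(123, "Sydney", "Saving") compiles happily even though the account type and branch are swapped; a fluent builder with named steps removes that ambiguity.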

I will not be taking you through the standard builder pattern, but one which implements a fluent interface to build a product. It will not just chain methods; it will also ensure that domain rules are met for the product.

A quick review of the main features of the builder pattern itself before we start addressing its quirks and the alternate approach.

A builder always returns a concrete product unlike factory or abstract factory which return abstract products.

A builder encapsulates the construction of an object (product). You would use one when the process of constructing an object is complex. The builder only defines the steps required to construct the object.

The builder does not enforce any order in which its steps should be called; the director has to know the order and construct the object through the builder.

While the product is being built, it is in a mutable state.

Let us consider Account as our product which requires to be built.

class Account
{
    public int Number { get; internal set; }
    public string Type { get; internal set; }
    public IContact PrimaryContact { get; internal set; }
    public string HomeBranch { get; internal set; }
    // Optional
    public IEnumerable<IContact> OtherContacts { get; internal set; }
}

We have IContact just to spice up this product.

interface IContact
{
}

Let us also define a concrete class for this interface.
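The original does not show this class, so here is a minimal assumption: the later examples only exercise an Address(string) constructor, so a single City property suffices (the IContact marker interface is repeated for self-containment):

```csharp
// Marker interface from above.
interface IContact
{
}

// Minimal concrete contact. The single City property is an assumption;
// the later examples only use the Address(string) constructor.
class Address : IContact
{
    public string City { get; private set; }

    public Address(string city)
    {
        City = city;
    }
}
```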

Our typical builder interface would look something like this

interface IAccountBuilder
{
    void SetNumber(int number);
    void SetType(string type);
    void AssignHomeBranch(string branchName);
    void RecordPrimaryContact(IContact primary);
    Account Build();
}

Most examples of the Builder out there would guide you to build a Director next, which takes in a Builder and returns the product. The director should know the order in which to call the methods on the builder. Most directors do not accept any input from the user except the builder itself. But if we need to accept inputs from the user (or program) to create our product, the builder needs to accept those inputs (as in our example), and the director also needs to take them in so that it can pass them over to the builder.

Instead, we can have a builder something like this:

class AccountBuilder
{
    private Account _account = null;
    private List<IContact> Contacts { get; set; } = new List<IContact>();

    public AccountBuilder Create()
    {
        _account = new Account();
        _account.OtherContacts = Contacts;
        return this;
    }

    public AccountBuilder As(string type)
    {
        _account.Type = type;
        return this;
    }

    public AccountBuilder With(int number)
    {
        _account.Number = number;
        return this;
    }

    public AccountBuilder In(string branch)
    {
        _account.HomeBranch = branch;
        return this;
    }

    public AccountBuilder HavingAddress(IContact contact)
    {
        _account.PrimaryContact = contact;
        return this;
    }

    public AccountBuilder AddingOtherAddress(IContact contact)
    {
        Contacts.Add(contact);
        return this;
    }

    public Account Build()
    {
        return _account;
    }
}

We can now create an Account instance like this

AccountBuilder _builder = new AccountBuilder();
var account = _builder.Create()
    .As("Saving")
    .With(123)
    .HavingAddress(new Address("Sydney"))
    .In("Sydney")
    .Build();

The above builder uses method chaining to construct an Account. Each method returns the same instance, which allows the calls to be chained, and through this we assign the properties of the Account. Finally, calling Build() returns the completed (hopefully!) Account.

Even though the new builder is much easier to use, it comes with a lot of pitfalls. To use this interface, the developer needs to know the implementation details of each method. We cannot enforce assignment of mandatory properties, knowing which ones are optional is not intuitive, and the client can also repeat assigning certain properties.
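To make one of those pitfalls concrete, here is a trimmed-down sketch of the fluent builder showing what happens when a client forgets to start with Create() (the Demo class and its output are mine, for illustration only):

```csharp
using System;

class Account
{
    public string Type { get; internal set; }
}

// Trimmed-down version of the fluent builder above.
class AccountBuilder
{
    private Account _account = null;

    public AccountBuilder Create()
    {
        _account = new Account();
        return this;
    }

    public AccountBuilder As(string type)
    {
        _account.Type = type; // throws if Create() was never called
        return this;
    }

    public Account Build() => _account;
}

class Demo
{
    static void Run()
    {
        var builder = new AccountBuilder();
        try
        {
            builder.As("Saving"); // forgot Create(): _account is still null
        }
        catch (NullReferenceException)
        {
            Console.WriteLine("Builder misused: Create() was never called.");
        }
    }
}
```

Nothing stops the client from skipping Create(), calling As() twice, or never reaching Build(); the compiler is no help at all here.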

Let us try to remove some of these ways to fail by introducing certain guards. How can we ensure clients always start with Create(), which initialises the Account instance?
We’ll make the constructor of AccountBuilder private and make Create() a static method. Clients can now obtain an AccountBuilder only through Create().

// In AccountBuilder
private AccountBuilder()
{
    _account = new Account();
    _account.OtherContacts = Contacts;
}

public static AccountBuilder Create()
{
    return new AccountBuilder();
}

Now our clients would use the builder like this

var account = AccountBuilder.Create()
    .As("Saving")
    .With(123)
    .HavingAddress(new Address("Sydney"))
    .In("Sydney")
    .Build();

Now the only saving grace is that we can at least ensure the client always gets an Account instance, even if it may not be completely constructed. I would still be hesitant to ship this to developers. What we want is to enforce an order of construction and guide them until all the rules have been met, before the Account product is returned.

For this we can look at the I of the SOLID principles. Through the Interface Segregation Principle, I’ll show how we can have a water-tight builder.
We shall isolate each method in a separate interface, and while at it, also determine what the next permissible call(s) from each method should be. This is done by returning the next permissible interface.
Let us see this in code to make it clearer.

Based on our methods we’ll define the following interfaces.

interface IAccountType
{
    IAccountNumber As(string type);
}

interface IAccountNumber
{
    IPrimaryContact With(int number);
}

interface IPrimaryContact
{
    IOtherContact HavingAddress(IContact contact);
}

interface IOtherContact
{
    IOtherContact AddingOtherAddress(IContact contact);
    IHomeBranch NoMoreAddress();
}

interface IHomeBranch
{
    IAccountBuilder In(string branch);
}

interface IAccountBuilder
{
    Account Build();
}

Each interface returns only the next available interface, which lets the user progress only through the steps we have defined. We can also allow the user to skip optional parameters (see IOtherContact). Only when the client reaches IAccountBuilder can they access Build(), which finally returns the Account instance.

The account builder will now look like this. It will implement all of the above interfaces.

class AccountBuilder : IAccountBuilder, IAccountNumber, IAccountType,
    IPrimaryContact, IHomeBranch, IOtherContact
{
    private Account _account = null;
    private List<IContact> Contacts { get; set; } = new List<IContact>();

    private AccountBuilder()
    {
        _account = new Account();
        _account.OtherContacts = Contacts;
    }

    public static IAccountType Create() => new AccountBuilder();

    public IAccountNumber As(string type)
    {
        _account.Type = type;
        return this;
    }

    public IPrimaryContact With(int number)
    {
        _account.Number = number;
        return this;
    }

    public IOtherContact HavingAddress(IContact contact)
    {
        _account.PrimaryContact = contact;
        return this;
    }

    public IOtherContact AddingOtherAddress(IContact contact)
    {
        Contacts.Add(contact);
        return this;
    }

    public IHomeBranch NoMoreAddress()
    {
        return this;
    }

    public IAccountBuilder In(string branch)
    {
        _account.HomeBranch = branch;
        return this;
    }

    public Account Build()
    {
        return _account;
    }
}

And we use it similar to before

account = AccountBuilder.Create()
    .As("Saving")
    .With(123)
    .HavingAddress(new Address("Sydney"))
    .NoMoreAddress()
    .In("Sydney 2000")
    .Build();

With an optional address:

account = AccountBuilder.Create()
    .As("Saving")
    .With(123)
    .HavingAddress(new Address("Sydney"))
    .AddingOtherAddress(new Address("Katoomba"))
    .NoMoreAddress()
    .In("Sydney")
    .Build();

It doesn’t look much different, but the client cannot progress to the next method without going through the previous one.
Depending on how you see this, the class is not immutable. If immutability is something you desire, then do not return the same instance of AccountBuilder but a new one each time. You would need to maintain the state internally in AccountBuilder, and create the Account only in the Build() method.

class AccountBuilder : IAccountBuilder, IAccountNumber, IAccountType,
    IPrimaryContact, IHomeBranch, IOtherContact
{
    // Maintain state
    private List<IContact> Contacts { get; set; } = new List<IContact>();
    private string Type { get; set; }
    private int Number { get; set; }

    // protect instantiation
    private AccountBuilder()
    {}

    public static IAccountType Create() => new AccountBuilder();

    public IAccountNumber As(string type) => new AccountBuilder()
    {
        Type = type
    };

    public IPrimaryContact With(int number) => new AccountBuilder()
    {
        Type = this.Type,
        Number = number
    };

    public Account Build()
    {
        var account = new Account();
        account.Type = Type;
        account.Number = Number;
        return account;
    }

    /// rest of the implementation left out for brevity
}

I have left the rest of the methods out in the above example, but you should get the idea. I now maintain the state of the user’s inputs internally in the AccountBuilder instance; when Build() is called, I transfer all the values to the Account instance. BTW, I have used C# 6’s expression-bodied members.

I find builders very useful, especially when my domain is complex and I need to control how instances are created. With fluent interfaces and enforced domain rules, it becomes easy for developers to create instances of your product. I find this implementation of Builder more expressive and with more control.