DDD Bounded contexts

Feb 26, 2015 at 10:17 PM
Edited Feb 26, 2015 at 10:30 PM

Using DDD with bounded contexts, I will have multiple DbContexts. It is possible for the same database table to be modelled by two different entities (each within its own bounded context/DbContext).

Is it possible for EFCache to invalidate the cache over both DbContexts when I manipulate a table's data in one or the other bounded context?

I have created a small test project based on the Airline example in the EFCache blog. If the entities in my bounded contexts share the type names (e.g. Airline and Aircraft) then the cache is invalidated for both contexts. If I rename the entities in one context, then the cache isn't invalidated.

My guess is that the data is being cached against the entity type name. My policy is set up to select caching based on table name; is there any configuration I can add to cache against the table name (for example)?
Mar 4, 2015 at 3:10 PM
Edited Mar 4, 2015 at 6:29 PM
Shouldn't you have a separate cache instance for each context to avoid conflicts? I guess that would be a workaround that is only feasible if you are using a simple caching mechanism like InMemoryCache.
On the other hand, if you have two entity sets pointing to the same table and this table is modified, isn't it correct to invalidate all the results for these entity sets, regardless of the context the entity sets belong to?
Mar 5, 2015 at 2:20 PM
Edited Mar 5, 2015 at 2:21 PM
So if I have two separate libraries, each containing a separate bounded context (BC), should I use a single InMemory cache instance, or one for each BC?

The behaviour I want is for an update to a table in BC1 to invalidate both the BC1 and BC2 caches wherever that table is used.

So say in BC1 I have the table "TblStudents" mapped to a DbSet called "Students", and in BC2 I have TblStudents mapped to a DbSet called "People" (a smaller subset of columns than "Students"). Can I set up EFCache so that cached data for both BC1::Students and BC2::People is invalidated when either is updated?
Mar 5, 2015 at 3:14 PM
Sorry, I was thinking aloud. I think using a single cache is the right thing to do. This is because you're ultimately working with the same table, and as a result it doesn't matter who changed the data. What matters is that the data was changed, so the cached results are no longer valid and the cache has to be invalidated.

The DbSets you are using are from the so-called conceptual model, while the cache works on entity sets from the store model. Code First does not let you rename these entity sets (I think you could do that with an edmx), so I think the cache should be invalidated regardless of which entity set you modify.
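For reference, a single cache instance shared by all contexts can be wired up once in a DbConfiguration, along the lines of the EFCache blog sample (the configuration class name here is mine, not part of EFCache):

```csharp
using System.Data.Entity;
using System.Data.Entity.Core.Common;
using EFCache;

public class SharedCacheConfiguration : DbConfiguration
{
    // One cache instance shared by every DbContext in the process,
    // so an update made through BC1 can invalidate entries created
    // by queries issued through BC2.
    private static readonly InMemoryCache _cache = new InMemoryCache();

    public SharedCacheConfiguration()
    {
        var transactionHandler = new CacheTransactionHandler(_cache);
        AddInterceptor(transactionHandler);

        Loaded += (sender, args) =>
            args.ReplaceService<DbProviderServices>(
                (services, _) => new CachingProviderServices(
                    services, transactionHandler, new DefaultCachingPolicy()));
    }
}
```

Both bounded contexts then pick up the same InMemoryCache instance without any per-context setup.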
Mar 13, 2015 at 9:15 PM
Edited Mar 19, 2015 at 9:17 PM
Thanks for your help. I have downloaded the source and more or less worked out how the caching is handled.
Items are cached against the DbSet name, so in my example above two separate contexts referencing the same table with different DbSet names will cache independently of one another.

In my local source, I have extended the CachePolicy object to allow the application to override the selection of the unique string that is used to cache against. So in my extended CachePolicy I can return the table name instead of the entity set name:

The new function's default implementation in CachePolicy is:

protected internal virtual Func<EntitySetBase, string> CacheCategory()
{
    return entitySetBase => entitySetBase.Name;
}

And where

_commandTreeFacts.AffectedEntitySets.Select(s => s.Name)

was used in CachingCommand, the affected entity sets are now mapped through the policy's CacheCategory() function instead.
The ICache implementation now doesn't cache against an entity set name, but rather a cache category.

I think this is possibly a worthy inclusion for everyone. What do you think?
Any feedback on this change would be eagerly received, to make sure I have understood the caching correctly.
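For context, the override in my policy looks roughly like this (CachePolicy here is my locally modified class, not the shipped EFCache API, and the derived class name is just illustrative):

```csharp
using System;
using System.Data.Entity.Core.Metadata.Edm;

public class TableNameCachePolicy : CachePolicy
{
    // Key cached results by schema-qualified table name rather than the
    // entity set name, so BC1::Students and BC2::People (both mapped to
    // TblStudents) fall into the same cache category and are invalidated
    // together. Table can be null outside Code First, in which case the
    // entity set name doubles as the table name.
    protected internal override Func<EntitySetBase, string> CacheCategory()
    {
        return entitySet => entitySet.Schema + "." + (entitySet.Table ?? entitySet.Name);
    }
}
```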
Mar 22, 2015 at 9:48 PM
Hi moozzyk, any thoughts on my suggestion to allow a custom policy to override the cache key?
Mar 23, 2015 at 6:08 AM
I have been quite busy at work recently and have not had much time to look into it.

I spent some time today thinking about this, and I don't see why the user even needs to be able to define the key. For the user the cache is more or less a black box, and the algorithm for generating keys is just an implementation detail. Isn't it simply a bug that results retrieved from the same table are treated differently because the entity type is different? (If the entity type used in both contexts had been the same, it would have worked as expected.) In that case, wouldn't it be sufficient just to fix the code so that it uses s => s.Schema + (s.Table ?? s.Name) as the key? (AFAIR, in the non-Code-First world the Table property is optional, in which case the entity set name is used as the table name.)
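To illustrate the proposed fix, the key function would collapse two entity sets mapped to the same table onto one key. A minimal sketch (the GetCacheKey helper name is mine, not the actual EFCache code):

```csharp
using System.Data.Entity.Core.Metadata.Edm;

internal static class CacheKeys
{
    // Two store-model entity sets mapped to the same table (e.g. BC1's
    // Students and BC2's People, both over TblStudents) produce the same
    // key, so modifying either invalidates both sets of cached results.
    // Table is null in some non-Code-First models; the entity set name
    // is then used as the table name.
    internal static string GetCacheKey(EntitySetBase entitySet)
    {
        return entitySet.Schema + (entitySet.Table ?? entitySet.Name);
    }
}
```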

In general I am not in favor of allowing users to control random aspects of the implementation they don't even need to be aware of, because they may not be able to code them correctly without really understanding where and how they are used. Another reason is that it may make it more difficult to change the code in the future without breaking people.

Mar 23, 2015 at 9:30 AM
Edited Mar 23, 2015 at 9:31 AM
Thanks for your quick response; sorry to have rushed you.

A bug fix would suit me just fine.

I don't think selection of the cache key can be considered a black box when you are openly suggesting people implement their own caching mechanisms; those people would need to understand how the cache key is selected. Furthermore, there may be some edge cases where the key needs to be user defined. Take the case where a user updates a table and needs to invalidate the cache for a view over the same table: having control over the cache key would mean a dev could categorise the table and the view under the same key, so updating the table would invalidate both caches (I appreciate that this would mean database schema details leaking into code). Other cases where people may need to categorise multiple tables under one key include triggers updating data and cascading deletes.