Thursday, September 1, 2011

Entity Framework Validation Bug (DbEntityValidationException)

The Problem

You can download the source code for this project from http://tinyurl.com/3ep48o5. The project is just a simple unit test project that shows the bug and workaround options. You need VS2010 and a SQLEXPRESS or SQLSERVER instance to run the sample. If you are using the full-blown SQL Server, you will need to uncomment the connection string in the app.config file. The database will automatically be generated when you run the tests.
I have had the opportunity to play around with the EF 4.1 code first constructs. I am really enjoying the development experience, though my use case is relatively simple at the moment. Anyhow, one of the features I really like is the ability to place navigation properties in model objects without the associated foreign key id in the code. The BadParentEntity model shows an example of this (see below).
public class BadParentEntity
{
    public int Id { get; set; }
    public string Description { get; set; }

    #region navigation properties

    [Required]
    public virtual ChildEntity Child1 { get; set; }

    [Required]
    public virtual ChildEntity Child2 { get; set; }

    #endregion
}

Notice there are two ChildEntity navigation properties while there are no foreign key properties to these child entities. The EF code first conventions infer that columns Child1_Id and Child2_Id must exist on the BadParentEntity table and that they must refer to the Id column on the ChildEntity table. I think this is great: it allows me to remove the id based relationships and infer them based on the complex model mappings between types. Unfortunately, I recently ran into an issue that required me to place the foreign key relationships back in. I wanted to share the solution I came up with for it. The test method below exhibits the problem. This test ends up throwing a DbEntityValidationException with two validation error messages (“The Child1 field is required” and “The Child2 field is required”)… weird… they are there in the database.

[TestMethod]
[ExpectedException(typeof(DbEntityValidationException))]
public void ExhibitBugTest()
{
    using (var repos = new TestBugRepository())
    {
        var parent = repos.BadParentEntities.Where((x) => x.Id == 1).Single();
        parent.Description = "THIS WILL FAIL!";

        try
        {
            repos.SaveChanges();
        }
        // EXCEPTION!!!! WHAT
        catch (DbEntityValidationException dbEx)
        {
            foreach (var validationErrors in dbEx.EntityValidationErrors)
            {
                foreach (var validationError in validationErrors.ValidationErrors)
                {
                    Debug.WriteLine(validationError.ErrorMessage);
                }
            }

            // rethrow, we want to fail this test if this happens
            throw;
        }
    }
}

The parent I selected from the repository exists via some pre-seeded data in the database, and this parent has a child defined for both navigation properties. I would expect this code to work because all I am doing is updating a description column. It seems as though EF thinks the child foreign keys are not present because they are lazy and were never de-referenced. I have come up with a handful of ways to work around it, but most of them leaked ORM details out into the code (and I don’t like that). So below are the workarounds I came up with and the final one I settled on.

Workarounds
The first option I came up with is to turn off entity validation when saving changes through the repository. I don’t recommend this option because automatic model verification is highly beneficial and some of the other workaround options are less drastic. With the below code I no longer receive an exception when saving my entity modifications.


DON’T DO THIS

[TestMethod]
public void WorkaroundValidateOnSaveDisabled()
{
    // open a new connection and get the parent. update a single column and save
    using (var repos = new TestBugRepository())
    {
        repos.Configuration.ValidateOnSaveEnabled = false;
        var parent = repos.BadParentEntities.Where((x) => x.Id == 1).Single();
        parent.Description = "Updated!";
        repos.SaveChanges();
    }
}


The next option has to do with forcing the lazy evaluation of the child properties. Again, I don’t recommend this solution because it leaks usage requirements onto the users of the model objects. As you can see in the code below, a reference to each child is saved off in a locally scoped variable.

DON’T DO THIS

[TestMethod]
public void WorkaroundOption1()
{
    using (var repos = new TestBugRepository())
    {
        var parent = repos.BadParentEntities.Where((x) => x.Id == 1).Single();
        parent.Description = "Updated From Workaround!";

        // just dereference the children to workaround
        var child1 = parent.Child1;
        var child2 = parent.Child2;

        // NO EXCEPTION
        repos.SaveChanges();
    }
}

The last option involves updating the model so that it is defined in a manner where the Entity Framework no longer throws an error. The option here is to just remove the [Required] attribute from the navigation properties altogether. EF will no longer verify their presence, but those attributes were probably there for a reason and we don’t want to lose that data integrity. Unfortunately, this is the option I chose.

Unfortunately, do this

public class NotRequiredWorkaroundParent
{
    public int Id { get; set; }
    public string Description { get; set; }
    public virtual ChildEntity Child1 { get; set; }
    public virtual ChildEntity Child2 { get; set; }
}
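The alternative I alluded to earlier — putting the foreign key relationships back into the model — keeps the required semantics at the cost of leaking ids into the code. A rough sketch of what that shape could look like is below. The class and property names here are hypothetical (they are not from the sample project), and [ForeignKey] is assumed from the System.ComponentModel.DataAnnotations.Schema namespace:

```csharp
using System.ComponentModel.DataAnnotations.Schema;

// Hypothetical shape: expose scalar foreign keys next to the navigation
// properties. Validation then runs against the int columns, which are
// always loaded, instead of against the lazy navigation references.
public class ExplicitFkParent
{
    public int Id { get; set; }
    public string Description { get; set; }

    // non-nullable ints are implicitly required by the model
    public int Child1Id { get; set; }
    public int Child2Id { get; set; }

    [ForeignKey("Child1Id")]
    public virtual ChildEntity Child1 { get; set; }

    [ForeignKey("Child2Id")]
    public virtual ChildEntity Child2 { get; set; }
}

// trimmed-down child so the sketch stands alone
public class ChildEntity
{
    public int Id { get; set; }
}
```

This trades a cleaner model for validation that works without touching the lazy proxies.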

Tuesday, August 30, 2011

ASP.NET MVC Custom Unobtrusive Validation

A recent project I am working on required some validation that was not supported out of the box by any of the ASP.NET MVC 3.0 validation attributes. I also wanted to support client side validation, which quickly got me into the world of JQuery unobtrusive validation. Bear with me, as the first part of this blog has nothing to do with ASP.NET MVC, but I think it is valuable to know what is actually happening under the covers. In the end we are going to create an extremely useful validator that verifies a value equals a static number. Yes, not that useful, but this simplicity allows for greater concentration on the technology than on the logic of the code. Find the source code for the project @ http://tinyurl.com/3d6vn7w.

JQuery unobtrusive validation uses convention-based attribute decorations on form elements that are later processed by JQuery. The below code snippet shows the html that is output by the simple validation sample we will be using in the project. There are three elements in this html: a submit button, a span that is a placeholder for the validation error message, and an input that has a bunch of data-val-* attributes. The data-val-* attributes are what JQuery unobtrusive validation interrogates and uses to infer client side validation behavior for controls. This mechanism provides the benefit of not embedding javascript in your page, while also allowing the browser to continue to function if javascript is not enabled. By default there are a number of data-val-* constructs that are supported. For instance, support for data-val-required is provided in jquery.validate.unobtrusive.js.
<form action="/" method="post">
    <div>
        <input class="text-box single-line" data-val="true"
               data-val-number="The field FavoriteNumber must be a number."
               data-val-onenum="That&#39;s not your favorite number!"
               data-val-onenum-thenum="7"
               data-val-required="The FavoriteNumber field is required."
               id="FavoriteNumber" name="FavoriteNumber" type="text" value="" />
        <span class="field-validation-valid" data-valmsg-for="FavoriteNumber"
              data-valmsg-replace="true"></span>
    </div>
    <input type="submit" value="submit!" />
</form>
How does it do this!? Under the covers, once the document has loaded, JQuery validation parses the DOM and creates a key/value mapping to functions that know how to perform the desired validations. It also attaches some event notifications to realize when focus has left a control (actually configurable, but outside the scope of this entry). There are situations, though, where you would like to add support for a validator that does not ship out of the box. This is done through the key/value lookup that JQuery unobtrusive validation uses under the covers; the token after data-val-[token] is used as a lookup into an adapters collection. What we are going to do in this sample is write a javascript function that performs validation, and create the code that bridges the javascript to our pretty ASP.NET MVC code. The snippet below contains the javascript that must be included and referenced by the project to get unobtrusive validation.

<script type="text/javascript">
    // the validation method that will be executed when javascript
    // is enabled and the element has a decorated data-val-onenum
    // attribute
    $.validator.addMethod("sample_onenumval", function (value, element, theNumber) {
        if (value == null || value.length == 0) {
            return true;
        }
        return theNumber == value;
    });

    // 1. the first param is the name of the data-val attribute,
    //    used to find controls that require validation (the
    //    attribute would be data-val-onenum)
    // 2. the second param is the name of the parameter attribute
    //    for onenum, found by matching an attribute name of
    //    data-val-onenum-thenum
    // 3. parameter 3 is optional, and is the name of the
    //    validation method. I intentionally gave a different name
    //    here, just to point out the name can be different. if the
    //    value is left off then the rule name is used to look up
    //    the validation method
    $.validator.unobtrusive.adapters.addSingleVal('onenum', 'thenum', 'sample_onenumval');
</script>

The line above that invokes addMethod contains a lookup key that is used to find the anonymous function specified in parameter 2. The validation function returns true if validation succeeds, or false if it fails. You can see that the anonymous function takes three arguments… more on that in a bit. Moving a bit further down you will find a line that adds an entry to an adapters collection. This call tells JQuery validation a couple things. First, it states what the lookup key is for data-val-onenum attributes (in this case sample_onenumval). It also tells the adapter what parameter value should be passed to the validation method, in this case data-val-onenum-thenum. But how in the world!!! (At least that is what I said.) It turns out that “adapters” are actually an abstract concept that you can provide your own implementation for. The addSingleVal adapter provides logic for finding the parameter value and knows how to call the validation function providing said parameter value. When the time comes to validate, JQuery validation defers to the adapter, which then interrogates the html to properly build up the function invocation. Thankfully, JQuery unobtrusive validation ships with a default set of adapters, so you only need to write one if your needs are not met. I came across an article from Brad Wilson’s blog where he has defined them all extremely well: http://bradwilson.typepad.com/blog/2010/10/mvc3-unobtrusive-validation.html. Now onto some C# source code!

public class OneNumberValidationAttribute : ValidationAttribute, IClientValidatable
{
    private int _theNumber;

    public OneNumberValidationAttribute(int theNumber)
    {
        _theNumber = theNumber;
    }

    public override bool IsValid(object value)
    {
        var isValid = true;
        if (value != null)
        {
            try
            {
                var converted = Convert.ToInt32(value);
                isValid = (converted == _theNumber);
            }
            catch (FormatException) { }
            catch (InvalidCastException) { }
        }
        return isValid;
    }

    public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context)
    {
        yield return new OneNumberValidationRule(ErrorMessage, _theNumber);
    }
}

public class OneNumberValidationRule : ModelClientValidationRule
{
    private static string _thenumValidationType = "onenum";
    private static string _thenumValidationParameterKey = "thenum";

    public OneNumberValidationRule(string errorMessage, int theNumber)
    {
        base.ErrorMessage = errorMessage;
        base.ValidationType = _thenumValidationType;
        base.ValidationParameters.Add(_thenumValidationParameterKey, theNumber);
    }
}

The code above is the definition of the server side validation attribute (created by deriving from the ValidationAttribute class). You will also notice that the class implements something called IClientValidatable, which forces the contract of a method called GetClientValidationRules (http://msdn.microsoft.com/en-us/library/system.web.mvc.modelclientvalidationrule.aspx). These rules are serialized as JSON to the client and are what create the data-val attribute names that end up decorating the elements. In the code above we return a single OneNumberValidationRule that is composed of an error message, a validation type, and a single validation parameter. Notice that the validation type matches the name of our validation adapter specified in the javascript; it must match the adapter that processes the rule. In addition, including a validation parameter adds an attribute of data-val-[type]-[paramx] to the element. The attribute-decorated view model is below, along with the source code for the project (Source: http://tinyurl.com/3d6vn7w).

public class IndexViewModel
{
    [OneNumberValidationAttribute(7, ErrorMessage = "That's not your favorite number!")]
    public int FavoriteNumber { get; set; }
}
Side Note
As you look at the project you may notice I used the concept of a ViewModel instead of a model for the sample. For all but smaller projects I advocate a single-ViewModel-per-View approach. With all the declarative programming that is supported out of the box by ASP.NET MVC, it feels funny to decorate a model object with something that has a direct correlation with the UI. I realize this can create more mapping logic in your code, but I believe it creates a cleaner business layer and allows people working on the backend to not worry that they are directly affecting the UI.
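To illustrate the mapping cost being accepted here, a hand-rolled model-to-ViewModel mapping might look like the sketch below. The Person and PersonViewModel types are made up for the example, and the UI validation attribute is left off so the snippet stands alone:

```csharp
// Hypothetical domain object -- no UI validation attributes on it.
public class Person
{
    public string Name { get; set; }
    public int FavoriteNumber { get; set; }
}

// UI-facing shape; in the real project this is where the
// [OneNumberValidation] attribute would live.
public class PersonViewModel
{
    public int FavoriteNumber { get; set; }
}

public static class PersonViewModelMapper
{
    // hand-rolled mapping; libraries like AutoMapper can remove
    // this boilerplate if it grows
    public static PersonViewModel ToViewModel(Person model)
    {
        return new PersonViewModel { FavoriteNumber = model.FavoriteNumber };
    }
}
```

The duplication is deliberate: the ViewModel can evolve with the view without the domain type ever changing.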


Friday, August 12, 2011

Smart Card Framework – Where’s my card at? – Part 2

If you have not read part one of this series, we are trying to build a framework for interacting with smart cards and readers on a Windows based PC using the .NET framework. For more information on the goals for this framework please see Part 1. In this part we are going to concentrate on the composition model for detecting what smart card readers exist on our PC and on notifications when a card is inserted into or removed from a reader. I recommend you first download the SmartCard code base for this project from http://smartcardframework.codeplex.com. The project may seem large initially, but most of the code is currently stubbed for future implementation. The best place to start is with execution of the unit tests that exist in the SmartCard.Framework.UnitTests project. As long as you have at least one PC/SC compliant smart card reader connected to your PC, all unit tests should pass.
As described in part 1, we want to support multiple smart card API’s for communication with various readers. In order to prove this functionality out I have stubbed two readers in our application. The first reader is based on PC/SC and the other is based on the Windows file system. The project structure for these two readers is displayed on the right. As we continue development of the API we will always ensure that both of these API’s are in use. The file system reader will serve as the baseline for testing components where physical interaction with a reader would otherwise be required. Now onto the unit tests!
The way I have decided to support multiple card readers comes down to a couple design and technology decisions. MEF will be used for plug-in composition, and a centralized management class will provide a single point of communication for reader notifications. The Managed Extensibility Framework (MEF) is new to Silverlight 4 and .NET 4.0. It provides a standardized plugin and discovery approach for disparate components across a system. If you are interested in learning more see the MEF CodePlex site http://mef.codeplex.com/. The code snippet below is from the DefaultFrameworkCardReaderManager and shows how we will aggregate and find a set of readers using MEF.
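Before the manager-side snippet, it may help to see the other half of the composition: the piece a plugin author writes. A minimal sketch is below, with the framework contracts trimmed down to only what the example needs (the real interfaces in the project are richer), and the class name is made up:

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.Linq;

// contracts trimmed down for the sketch
public interface ISmartCardReader { string Name { get; } }

public interface ISmartCardReaderDiscovery
{
    IEnumerable<ISmartCardReader> Discover();
}

// The [Export] attribute is what lets the manager's [ImportMany]
// field find this implementation during ComposeParts.
[Export(typeof(ISmartCardReaderDiscovery))]
public class StubReaderDiscoveryProcess : ISmartCardReaderDiscovery
{
    public IEnumerable<ISmartCardReader> Discover()
    {
        // a real module would probe a driver API here;
        // the stub simply reports no readers
        return Enumerable.Empty<ISmartCardReader>();
    }
}
```

Dropping an assembly containing a class like this into the catalog is all that is needed for the manager to start managing its readers.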
/// <summary>
/// reader discovery modules
/// </summary>
[ImportMany(AllowRecomposition = false)]
private IEnumerable<ISmartCardReaderDiscovery> _readerDiscoveryModules = null;

// … construction code here, that calls Compose and the reader discovery

private void Compose()
{
    // TODO: use a different catalog to acquire data. and we also probably
    // use a composition model that allows for multiple catalogs
    // ... save that for later though.
    AssemblyCatalog catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
    var container = new CompositionContainer(catalog);
    container.ComposeParts(this);
}
If you look in the source code, there is nothing that actually “news up” the _readerDiscoveryModules field. This field is populated by MEF during its composition process. The ISmartCardReaderDiscovery interface is a contract that must be implemented by developers who create new API abstractions for our framework. The interface exposes a single method, Discover(), that is responsible for returning the list of smart card readers it knows about. This model loosely couples the management class from how it receives its set of concrete ISmartCardReaderDiscovery objects. The below code snippet displays how the interface is used later in the management class to actually receive the list of readers that will be managed.
private List<ISmartCardReader> FindReadersToManage()
{
    var readersToManage = new List<ISmartCardReader>();
    foreach (var discoveryProcess in _readerDiscoveryModules)
    {
        var discoveredReaders = discoveryProcess.Discover();
        readersToManage.AddRange(discoveredReaders);
    }

    return readersToManage;
}

private void AttachToReaderNotifications()
{
    foreach (var managedReader in _managedReaders)
    {
        managedReader.CardInserted += managedReader_CardInserted;
        managedReader.CardRemoved += managedReader_CardRemoved;
        managedReader.ReaderError += managedReader_ReaderError;
    }
}
So, now that we have the manager in place and we have created the contract for discovery of new readers, let’s go ahead and discuss/create the PC/SC implementation. For the purpose of this article we will cover only reader detection and an approach to detecting card insertion and removal using the PC/SC API. The next post will cover how we communicate with the card in a reader.
PC/SC is comprised of numerous functions, but in order to detect card insertion and the set of readers that exist on the system there are only a couple API calls of importance.

  • SCardListReaders – This API call provides a listing of the PC/SC compliant smart card readers by name. This is what our PC/SC discovery process uses to find all readers in the system.
  • SCardGetStatusChange – This API call provides the current state of a set of readers, and we use a polling mechanism in our code to track potential changes in reader state.
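Since the status-change polling itself is not shown until a later post, here is roughly what the interop surface for SCardGetStatusChange looks like. The declarations follow the usual winscard.dll signatures, and only the state constants a basic insert/remove poll needs are shown:

```csharp
using System;
using System.Runtime.InteropServices;

// Native SCARD_READERSTATE layout used by SCardGetStatusChange.
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public struct SCARD_READERSTATE
{
    public string szReader;       // reader name from SCardListReaders
    public IntPtr pvUserData;
    public uint dwCurrentState;   // state we believe the reader is in
    public uint dwEventState;     // state the driver reports back
    public uint cbAtr;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 36)]
    public byte[] rgbAtr;         // ATR of the inserted card, if any
}

public static class PcscStatus
{
    public const uint SCARD_STATE_UNAWARE = 0x0000;
    public const uint SCARD_STATE_CHANGED = 0x0002;
    public const uint SCARD_STATE_EMPTY   = 0x0010;
    public const uint SCARD_STATE_PRESENT = 0x0020;

    [DllImport("winscard.dll", CharSet = CharSet.Ansi)]
    public static extern int SCardGetStatusChange(
        IntPtr hContext, uint dwTimeout,
        [In, Out] SCARD_READERSTATE[] rgReaderStates, uint cReaders);

    // Usage sketch: seed dwCurrentState with SCARD_STATE_UNAWARE, call
    // in a loop, and when dwEventState has SCARD_STATE_CHANGED set,
    // inspect the PRESENT/EMPTY bits to decide whether a card was
    // inserted or removed; then copy dwEventState back into
    // dwCurrentState before the next call.
}
```

A real poll also needs an established context from SCardEstablishContext, which is omitted here.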
The reader discovery process for PC/SC is less than 100 lines in total (including comments). The manager class will find this implementation using MEF and will call Discover(), adding the discovered readers to the managed collection.
internal class PcscSmartCardReaderDiscoveryProcess : ISmartCardReaderDiscovery
{
    public IEnumerable<ISmartCardReader> Discover()
    {
        var readerNames = GetInstalledReaders();
        var readers = new LinkedList<ISmartCardReader>();
        foreach (var readerName in readerNames)
        {
            readers.AddLast(CreateReader(readerName));
        }
        return readers;
    }

    private PcscSmartCardReader CreateReader(string readerName)
    {
        // TODO: revisit how we will determine
        // the reader type. (name inference, injection, etc)
        return new PcscSmartCardReader(readerName, eReaderType.Unknown);
    }

    /// <summary>
    /// Returns the string names of all of the
    /// smart card readers currently available on the system.
    /// </summary>
    /// <returns></returns>
    private IEnumerable<string> GetInstalledReaders()
    {
        // get the length of buffer needed for all string names
        UInt32 bufferLength = 0;
        int retVal = 0;

        // get the buffer size required for getting all readers.
        retVal = PcscInvokes.SCardListReaders(0, IntPtr.Zero, null, ref bufferLength);
        if ((ePcscErrors)retVal != ePcscErrors.Success)
        { throw new PcscException("", (ePcscErrors)retVal); }

        // now load the buffer up
        byte[] mszReaders = new byte[bufferLength];
        retVal = PcscInvokes.SCardListReaders(0, IntPtr.Zero, mszReaders, ref bufferLength);
        if ((ePcscErrors)retVal != ePcscErrors.Success)
        { throw new PcscException("", (ePcscErrors)retVal); }

        ASCIIEncoding encoding = new ASCIIEncoding();
        string encoded = encoding.GetString(mszReaders);
        string[] split = encoded.Split('\0');
        return split.Where((x) => !string.IsNullOrWhiteSpace(x));
    }
}
There are a couple things I should point out. In the CreateReader method, there is a TODO comment around determining whether a reader is contact or contactless. At this point we are just setting this flag to unknown. The PC/SC API does not give us any indication of whether a reader is contact or contactless, so we will have to come up with a technique to determine what type of reader we are dealing with. Also, at the bottom of the GetInstalledReaders call you will notice some string tokenization around a byte array that was converted to ASCII. The SCardListReaders function returns a multi-string value, and there is no corresponding type in .NET. Side note… this code will break if the multi-string is a Unicode value. It is probably best that this code be pulled out into a MultiString class. Moving on… the screenshot to the right shows the execution of a unit test that exercises the discovery process. You can see from the results that my laptop has a single “Broadcom Corp Contacted SmartCard 0” reader that supports the PC/SC API.
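The MultiString extraction suggested above might look like the following sketch. It is ASCII-only, matching the current code; a Unicode-aware version would take the encoding as a parameter (the class and method names are my own, not from the project):

```csharp
using System.Collections.Generic;
using System.Text;

// Sketch of the suggested MultiString helper: splits a
// double-null-terminated ASCII buffer (as returned by
// SCardListReaders) into its component strings.
public static class MultiString
{
    public static IEnumerable<string> Parse(byte[] buffer)
    {
        var decoded = Encoding.ASCII.GetString(buffer);
        foreach (var token in decoded.Split('\0'))
        {
            // the buffer ends with two nulls, so empty
            // tokens are expected and skipped
            if (!string.IsNullOrWhiteSpace(token))
                yield return token;
        }
    }
}
```

GetInstalledReaders could then end with a single `return MultiString.Parse(mszReaders);`.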
Notice that the PcscSmartCardReaderDiscoveryProcess class is declared as internal, yet it can still be referenced in the unit tests. Add the below line to an AssemblyInfo.cs file and you can keep your assembly’s external interface clean while still providing tests for some internal code.

[assembly: InternalsVisibleTo("SmartCard.Framework.UnitTests")]

With this in place, the named assembly has access to this assembly’s internals.
At this point this feels like an awfully large amount of code just to find out if a card is inserted (about 1000 lines), but this is code that only needs to be written once and will allow for consistent interaction with smart cards as new readers and API’s are introduced. I wanted to test the card insertion logic using a real smart card (I was going to use the SIM card from my phone, but I need a PC/SC reader that can take this device). So, it turns out I needed to do some purchasing; I found a reader that will work great and have it on order. The best deal I could find for a PC/SC compliant reader was http://www.amazon.com/gp/product/B0045BIUGG. Hope it gets here sometime next week. When it does, I will throw up some quick info on what is in my card and we will start designing how card features are integrated into the framework. The goal is to have an example of retrieving contacts from the SIM card through the API next week. We will concentrate on how we expose the features a card supports to application developers interacting with the cards.

Tuesday, August 9, 2011

Smart Card Framework – Part 1

Summary
Smart cards are becoming more prevalent in everyday life. The US government has mandated that its government-issued credentials support the PIV specification (http://csrc.nist.gov/publications/PubsSPs.html). In addition, the SIM cards in our phones are smart cards, and smart cards are used heavily in European countries. US passports have a contactless chip in them, and I assume that as time progresses our identification cards will start to have contactless and contact chips embedded in them.

Interacting with smart card devices in a Windows environment is not one of the most intuitive things I have done. It is primarily performed through the PC/SC API’s, which provide some standard functions for communicating with smart card readers. See http://www.gemalto.com/techno/pcsc/ for more information on the PC/SC specification. PC/SC provides a nice abstraction from the IO communication, but it is still highly reliant on the APDU (Application Protocol Data Unit) communication required to perform operations on the card. I wanted to start writing a series on a framework to simplify the way we communicate with these cards using the Microsoft .NET framework (in this case 4.0). I don’t know how many parts this series will be, but by the end of it my goal is to have an extensible smart card framework that can support the smart cards you may be using today.

For more information on an APDU see the Wikipedia entry at http://en.wikipedia.org/wiki/Smart_card_application_protocol_data_unit. We will be diving into what an APDU is when we start designing and developing the communication layers of the framework. Much of this process will be a learning experience for me as well as we get further into the design.

Let's start
I have written abstractions in front of the Windows PC/SC smart card API’s before, and there were many things learned as that API was developed. I am taking this blog series as an opportunity to start from the ground up and apply lessons learned. As with many projects I start, I like to establish a set of design goals. With my previous knowledge of smart card development, my list of goals for this framework is below.

  1. Full abstraction from the card communication protocol as a client application.  If I am using the API to talk to a defined card type, I do not want to know APDU's are behind the scenes.
  2. Type safe communication with a card as a client
  3. Optional verbose logging of all APDU messages behind the scenes
  4. Auto-detection when a card is inserted
  5. Automatic discovery of the inserted card type
  6. Pluggable support for new card types and features (no need to recompile core code)… maybe use MEF
  7. Unit testability at numerous layers of the framework.  For instance, the ability to test without a physical card.  We will get into the framework model later in the series.
  8. The ability to simultaneously support multiple card readers that are using different driver API's.  For instance, support for PC/SC compliant readers and MCP readers from MAGTEK.
  9. Assume support from .NET 3.5 and up.  There may be a desire to support Windows Mobile environments using .NET CF 3.5
I also have a rough idea of what I would like the end user's experience to be like; an example is below.

// prior to working with the cards, we have a management class
// that controls all interaction with the readers
ISmartCardReaderManager manager = new AutoDetectionReaderManager();
manager.CardInserted += CardInserted;  // a card was inserted
manager.CardRemoved += CardRemoved;    // a card was removed
manager.ReaderError += ReaderError;    // error interacting with reader(s)

// example of the callback method that would handle reader messages
public void CardInserted(object sender, CardInsertedEventArgs eventArgs)
{
    // we can access both the card and the reader from the args
    var card = eventArgs.InsertedCard;
    var reader = card.Reader;

    // there is a feature collection in the card that we can query
    // to interact with the card
    foreach (var feature in card.Features)
    {
        Console.WriteLine(feature.GetType().FullName);
    }

    // we could fetch a feature from the card to use
    IAddressBookFeature feature = card.Features.TryGet<IAddressBookFeature>();
    if (feature != null)
    {
        // we could then get the list of contacts for instance
        // feature.Contacts...
    }

    // reader.Type is a property that would contain Contact/Contactless/Unknown

    // we can also disconnect from the card, and the interface implements
    // the disposable pattern at this time
    card.Disconnect();
    card.Dispose();
}
I think this is a good spot to stop this entry. I expect to have part 2 in place in the next couple of days. In part 2, I will begin to cover the various layers of abstraction and encapsulation that will be provided through the API to allow us to interact with a smart card in a way similar to the example above. I will probably also start a skeleton project and put it up on CodePlex for viewing.

Additional Information
Microsoft currently provides an abstraction to handle how different cards implement similar functionality, for instance verifying a pin code.  This API requires that card vendors also implement an interface specification defined by Microsoft.  The desire of the framework we are designing is not to redo this Microsoft API functionality but to supplement it.  It is possible that the framework we develop will use this API behind the scenes for some implementations, but since we cannot assume a card vendor will always implement the Smart Card Minidriver Specification we must supplement the existing base.  For instance, the mini-driver specification also does not provide a façade for features some cards have, such as fingerprint verification and personal identity data retrieval.  For more information on the Microsoft Smart Card API see http://msdn.microsoft.com/en-us/library/dd627645(v=vs.85).aspx.

There is also a company, CardWerk, that has implemented a supported framework for smart card development in the .NET framework.  You can find a link to their website at http://smartcard-api.com/professional.shtml.  Although I have not used the API, it seems to share some similarities with what we will be designing/developing through this series.

Sunday, March 20, 2011

Windows Live Mesh - NAS Synchronization

Summary
I use Windows Live Photo Gallery (it's awesome) to manage my photos. I recently bought a NAS device to store all of my media and found out that Photo Gallery is now substantially slower at detecting newly added images and recognizing faces. My guess is it has something to do with the overhead of pulling images off the network. As a solution I decided I would download Windows Live Mesh (WLM) and use the folder synchronization to keep a local copy of the files on my box. It turns out that WLM does not support synchronization of folders on a NAS. Luckily, the Microsoft Sync Framework guys have made it really simple to roll your own solution.

Details
The Microsoft Sync Framework defines a basic set of functionality that must be provided in order to synchronize information from different data sources (AKA replicas). Head over to http://msdn.microsoft.com/en-us/sync/default.aspx for more information. Their documentation is improving... at least compared to when I looked at it a year ago.

By default the Sync Framework comes with a handful of replica providers, one being a file synchronization provider. Below is a snippet of code from the application I am writing that performs the synchronization to my NAS.
1:  using (var sourceProvider = new FileSyncProvider(SourceDirectory.FullName, SyncFilter, SyncOptions))
2:  using (var destinationProvider = new FileSyncProvider(DestinationDirectory.FullName, SyncFilter, SyncOptions))
3:  {
4:     SyncOrchestrator agent = new SyncOrchestrator();
5:     agent.LocalProvider = sourceProvider;
6:     agent.RemoteProvider = destinationProvider;
7:     agent.Direction = SyncDirectionOrder.Upload;
8:     sourceProvider.DetectChanges();
9:     destinationProvider.DetectChanges();
10:    agent.Synchronize();
11:  }
Although there is a lot of context missing around this code, this is the core of what gets a synchronization session done. In order to synchronize two replicas, you must have a source provider and a destination provider (lines 1 and 2). For file synchronization, the Sync Framework provides the FileSyncProvider class. On line 4 a SyncOrchestrator is constructed; this object controls the interaction and flow of communication between the two providers. Lines 8 and 9 are optional depending on the sync options passed to the providers, but essentially they tell each provider to make sure the knowledge of the replica it represents is up to date (what files were changed, when, etc.). There is a bit of overhead in this for file system providers, so I decided to execute it manually. In the actual implementation, there are some progress notifications associated with detecting changes. Line 10 tells the agent to start the synchronization. I was pretty impressed with how simple this was to set up. I put my current project up on CodePlex if you want to take a look, or even better, let me know if you want contributor rights. I am going to add a couple of final features, but it is mostly meeting my basic requirements.
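For context, the SyncFilter and SyncOptions values passed into the providers on lines 1 and 2 might be built something like the sketch below. This is a hedged example, not the actual code from the CodePlex project: the exclude patterns are my own illustrative choices, and it assumes the Microsoft.Synchronization and Microsoft.Synchronization.Files assemblies from the Sync Framework SDK are referenced.

```csharp
using Microsoft.Synchronization.Files;

static class SyncSettings
{
    // Skip thumbnail caches and temp files so they are never replicated.
    // (Illustrative patterns only.)
    public static FileSyncScopeFilter SyncFilter
    {
        get
        {
            var filter = new FileSyncScopeFilter();
            filter.FileNameExcludes.Add("Thumbs.db");
            filter.FileNameExcludes.Add("*.tmp");
            return filter;
        }
    }

    // ExplicitDetectChanges is what makes the manual DetectChanges() calls
    // on lines 8 and 9 necessary; the recycle options send deleted or
    // overwritten files to the Recycle Bin instead of destroying them.
    public static FileSyncOptions SyncOptions
    {
        get
        {
            return FileSyncOptions.ExplicitDetectChanges |
                   FileSyncOptions.RecycleDeletedFiles |
                   FileSyncOptions.RecyclePreviousFileOnUpdates;
        }
    }
}
```

Without ExplicitDetectChanges, the provider runs change detection automatically at the start of Synchronize(), which is simpler but gives you less control over when the (potentially slow) file system scan happens.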

The code was a quick write, so let me know if you are interested in something and I will try to add it. I hope to have enough time to actually make this thing more usable over the next month. Currently I just have it set up as a Windows scheduled task to perform my sync nightly… or I can always run it manually. Download and compile from http://foldersyncer.codeplex.com/

Hopefully the Mesh team enables NAS sync, as I am positive it will go through a whole host of testing that I did not do.

Thursday, March 3, 2011

SmartCard – PC/SC SCardConnect Sharing Violation

Summary:

Windows 7 connects to a newly inserted smart card with exclusive access, which causes PC/SC SCardConnect calls to fail with a sharing violation whether shared or exclusive access is requested. After about 10 seconds, the exclusive connection is released and SCardConnect succeeds. The fix is to disable a handful of Group Policy settings (read the details) :).

The Details:
I have been working on an abstraction layer for our smart card communication here at Eid for a while. The code allows us to easily swap out readers without any concern for how communication with the reader occurs. It also provides a very simple and clean interface for querying a card for capabilities and features.

We just switched from a reader that uses a proprietary MagTek MCP API back to a reader based on the more standard PC/SC API, and all of a sudden our application was failing to connect to our smart card on Windows 7 boxes. We already had support for PC/SC communication, but all previous testing had been done in a Windows XP environment. The behavior we were seeing was that the SCardConnect API call failed with a sharing violation for about 10 seconds after a card was inserted. Once those 10 seconds passed, we would suddenly be able to connect to the card successfully.
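As a stopgap before changing policy settings, one blunt workaround is simply to retry the connect until the OS releases its exclusive hold. Below is a minimal P/Invoke sketch of that idea; the WinSCard constants come from WinSCard.h, but the retry counts and delay are my own illustrative values, not what we shipped.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

static class CardConnect
{
    const uint SCARD_SHARE_SHARED = 2;
    const uint SCARD_PROTOCOL_T0 = 1;
    const uint SCARD_PROTOCOL_T1 = 2;
    const uint SCARD_E_SHARING_VIOLATION = 0x8010000B;

    [DllImport("winscard.dll", CharSet = CharSet.Auto)]
    static extern uint SCardConnect(IntPtr hContext, string szReader,
        uint dwShareMode, uint dwPreferredProtocols,
        out IntPtr phCard, out uint pdwActiveProtocol);

    // Retries while another process (e.g. the OS on card insertion)
    // holds the card exclusively.
    public static IntPtr ConnectWithRetry(IntPtr context, string reader,
        int maxAttempts = 20, int delayMs = 500)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            IntPtr card;
            uint protocol;
            uint result = SCardConnect(context, reader, SCARD_SHARE_SHARED,
                SCARD_PROTOCOL_T0 | SCARD_PROTOCOL_T1,
                out card, out protocol);
            if (result == 0)            // SCARD_S_SUCCESS
                return card;
            if (result != SCARD_E_SHARING_VIOLATION)
                throw new InvalidOperationException(
                    string.Format("SCardConnect failed: 0x{0:X8}", result));
            Thread.Sleep(delayMs);      // card held exclusively; wait and retry
        }
        throw new TimeoutException("Card never became available.");
    }
}
```

This masks the symptom rather than fixing it, which is why the policy settings below are the better answer, but it does make the application resilient to any other process briefly holding the card.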

It turns out there are a number of Local Group Policy settings that control the behavior Windows takes when a card is inserted (particularly a card that has a WHQL-certified driver). I have pasted a screenshot of the settings that control this behavior below; you can run gpedit.msc to bring up the MMC console. For more information on these settings, visit the smart card help pages on TechNet at http://technet.microsoft.com/en-us/library/ff404287(WS.10).aspx

[Screenshot: Smart Card settings in the Local Group Policy Editor]

Saturday, February 26, 2011

First entry

My name is Chris Evans. I am currently a Sr. Lead Developer at EID Passport Inc., and I will be coming up on 5 years there in the near future. We primarily deal in .NET technologies and integrating with existing solutions. Check out our web page at www.eidpassport.com for more information.

These posts are generally for my own sake, so that I have some forum for retrieving past issues I have encountered and solved, but I hope upcoming entries help some people solve their problems a little faster.