Windows Mobile Support


Wednesday, 30 October 2013

Events, Delegates, Actions, Tasks with Mutex

Posted on 13:25 by Unknown
Mutex. If you are a developer, then you have surely heard about mutexes. A mutex is used to ensure that only one thread/process has exclusive access to a specific section of code.
Let's look over the following code and see whether it is correct or not.
public class Foo
{
    Mutex mutex = new Mutex();
    event SomeActionEvent someActionEvent;

    public void Do()
    {
        mutex.WaitOne();
        ...
        someActionEvent += OnSomeAction;
    }

    public void OnSomeAction(...)
    {
        mutex.ReleaseMutex();
    }
}
The “WaitOne” method blocks the current thread until the wait handle receives a release signal. “ReleaseMutex” is used when we want to release the lock.
The code above releases the mutex from an event handler that is executed on a different thread. The problem is that we don't call “ReleaseMutex” from the same thread that called “WaitOne”. A Mutex has thread affinity (only the thread that acquired it may release it), so the behavior we obtain is not the one we expect.
The same problem appears if we use a mutex in combination with delegates, lambda expressions or Action/Func.
public class Foo
{
    Mutex mutex = new Mutex();

    public void Do()
    {
        mutex.WaitOne();
        ...
        MyMethod(() =>
        {
            mutex.ReleaseMutex();
        });
    }
}
This is not all. We will hit the same problem when using Task, because each task can be executed on a different thread.
public class Foo
{
    Mutex mutex = new Mutex();

    public void Do()
    {
        mutex.WaitOne();
        ...
        Task.Factory.StartNew(() =>
        {
            ...
            mutex.ReleaseMutex();
        });
    }
}
There are different solutions to this problem, from named mutexes to other synchronization primitives. What we should remember about an anonymous mutex is that the release call has to come from the same thread that acquired the lock.
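If cross-process synchronization is not actually needed, one of those other primitives (my suggestion, not from the original post) is SemaphoreSlim, which has no thread affinity, so it can legally be released from a task, lambda or event handler running on another thread. A minimal sketch:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// A SemaphoreSlim(1, 1) acts as a single-owner lock but, unlike Mutex,
// has no thread affinity: Release() may be called from any thread.
var gate = new SemaphoreSlim(1, 1);

gate.Wait();                              // acquired on the calling thread
await Task.Run(() => gate.Release());     // released from a pool thread - legal here

// If the release above worked, we can acquire the lock again immediately.
bool reacquired = gate.Wait(TimeSpan.FromSeconds(5));
Console.WriteLine(reacquired ? "released and reacquired" : "still locked");
gate.Release();
```

The trade-off is that SemaphoreSlim is in-process only; when you need exclusion across processes, a named Mutex (released on the owning thread) is still the tool.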

Tuesday, 29 October 2013

Digging through SignalR - Commands

Posted on 00:41 by Unknown
Looking over the source code of SignalR, I found some interesting classes and ways of implementing different behaviors. In the next series of posts I will share with you what I found interesting.
Before starting, you should know that SignalR is an open source project that can be accessed on GitHub.
In today's post we will talk about the command pattern. This pattern offers the ability to define a “macro”/command that can be executed without knowing the caller. Commands can be handled in different ways, from queuing them to combining them or offering support for redo/undo.
In the SignalR library I found an implementation of the command pattern that caught my attention.
internal interface ICommand
{
    string DisplayName { get; }
    string Help { get; }
    string[] Names { get; }
    void Execute(string[] args);
}

internal abstract class Command : ICommand
{
    public Command(Action<string> info, Action<string> success, Action<string> warning, Action<string> error)
    {
        Info = info;
        Success = success;
        Warning = warning;
        Error = error;
    }

    public abstract string DisplayName { get; }

    public abstract string Help { get; }

    public abstract string[] Names { get; }

    public abstract void Execute(string[] args);

    protected Action<string> Info { get; private set; }

    protected Action<string> Success { get; private set; }

    protected Action<string> Warning { get; private set; }

    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode", Justification = "May be used in future derivations.")]
    protected Action<string> Error { get; private set; }
}
Why? The way the handlers for actions like success, warning, info and error are passed in. When creating a command, you need to specify them through the constructor, so the developer is forced to provide them. I think this is a great and simple approach. If a developer doesn't want to handle these actions, he can pass a null value for them. This solution is better than having one or more events.
It might be interesting to wrap these 4 parameters in a simple class. In this way you would have all the related actions under the same object, and you would reduce the number of parameters of the Command constructor by 3.
internal class CommandCallbackActions
{
    public CommandCallbackActions(Action<string> info, Action<string> success, Action<string> warning, Action<string> error)
    {
        Info = info;
        Success = success;
        Warning = warning;
        Error = error;
    }

    protected Action<string> Info { get; private set; }

    protected Action<string> Success { get; private set; }

    protected Action<string> Warning { get; private set; }

    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode", Justification = "May be used in future derivations.")]
    protected Action<string> Error { get; private set; }
}

internal abstract class Command : ICommand
{
    public Command(CommandCallbackActions callbackActions)
    {
        CallbackActions = callbackActions;
    }

    public abstract string DisplayName { get; }

    public abstract string Help { get; }

    public abstract string[] Names { get; }

    public abstract void Execute(string[] args);

    public CommandCallbackActions CallbackActions { get; set; }
}
Another thing that drew my attention was the “Execute” method. The command arguments are sent through an array of strings. This is a very simple and robust way to send parameters; if it is enough for your application, you should not change it to something more complicated. Otherwise you can replace the array of arguments with an interface (“ICommandArgs”). Each custom command can have its own implementation of this interface. Use this only if you really need it, otherwise you will only make the project more complicated.
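A sketch of what the “ICommandArgs” variant could look like (all the names below are mine, not SignalR's):

```csharp
using System;

// Marker interface for strongly typed command arguments.
internal interface ICommandArgs { }

// A hypothetical command's arguments.
internal class AddUserArgs : ICommandArgs
{
    public string UserName { get; set; }
}

// Commands declare the argument type they expect instead of parsing a string[].
internal abstract class TypedCommand<TArgs> where TArgs : ICommandArgs
{
    public abstract void Execute(TArgs args);
}

internal class AddUserCommand : TypedCommand<AddUserArgs>
{
    public string LastUserAdded { get; private set; }

    public override void Execute(AddUserArgs args)
    {
        LastUserAdded = args.UserName; // no string[] parsing needed
    }
}
```

A caller would then write `new AddUserCommand().Execute(new AddUserArgs { UserName = "radu" });` — more ceremony than `string[]`, which is exactly why it only pays off when the arguments really are structured.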


Wednesday, 23 October 2013

Story - Developers that don’t understand the business of the application

Posted on 00:21 by Unknown
These days I participated in a presentation of a product. The idea of the product was great; I think they will have a lot of success in the future.
Being a technical presentation, it was held by one of the developers of the team. From this point of view this was great, because the audience was made of developers and other technical people. He could respond to a lot of questions related to architecture, scalability, the protocols that were used, base concepts and so on.

Until….

Someone asked something related to the business idea and what kind of problem this application would solve…

In that moment our developer stopped……..

He could not give an answer….

And… he said…
“I'm a technical person, it is not my business to know the business of the application!”

In that moment I froze. How the hell can you develop an application without understanding the business? This is one of the biggest mistakes a team member can make – refusing to understand the business of the application. Before thinking about a solution, it is a must to understand the business of the application itself.
How can you solve a problem without understanding the problem itself?
Being a monkey with a keyboard is great for a monkey. But when you need to solve a problem and deliver a solution, being a monkey with a keyboard is not enough… This is how you end up with an airplane when the customer wanted a car.

Monday, 21 October 2013

Custom Content Delivery Network Mechanism using Cloud (Azure)

Posted on 08:59 by Unknown
Requirements:
Let's imagine an application that needs to deliver Excel reports in different locations around the world. Based on the client's origin, the application should deliver a specific Excel version. The application should be deployed in different locations around the world.
A classic CDN solution cannot be used, because the master node would need to push the content to the slaves (CDN nodes) and the total report size for each slave will be over 20GB.
Non-Cloud solution:
If we went with a non-cloud solution, we would develop an application deployed in different locations around the world. Each deployment should be able to detect the source of the request and provide the specific Excel report. We would also need a redirection/load-balancing mechanism that sends the user to a specific node.
Cloud solution:
If we go for a cloud solution based on Windows Azure, we can imagine a master-slave topology.
Each slave of our application will hold the Excel reports for the countries in its region and will be able to provide reports for those countries.
Having an application deployed in different data centers gives us the possibility to use Traffic Manager, an out-of-the-box mechanism that redirects a call to the closest data center where our application is deployed.
We can have our slaves deployed in different data centers around the globe. Each slave will have the reports for the countries that it serves. When a request comes from a country that is not served by that slave, the request will be redirected to the global slave, which has the reports for all the countries.
The slaves could detect that too many requests are coming for a country that is not mapped to their location, and trigger an alert or a provisioning action for the reports of that country.
Each slave will have an endpoint used to resolve an Excel report request and a storage account (blobs) used to store the reports themselves. Based on the client's attributes, this service will return a URL with a Shared Access Signature (SAS) for the blob where the report is stored. Using SAS, access to the content can be controlled.
The solution will contain a master that manages all the Excel reports from all the nodes. The master will be able to deploy new versions of reports, delete old ones and so on from each slave node. Besides this, the master is the one that can trigger the provisioning of a slave with additional countries. The master will contain a storage account (blob) with all the reports that exist and are valid, and a service that is able to manage and maintain all the slave nodes.
When provisioning is triggered, the download to a specific slave should not be done by the master node. Because we can have a lot of slaves, this would consume a lot of resources and could give us a lot of problems. Instead, the master node should send a notification to the specific slave that a report is available for download/update/delete; the slave receives the notification and triggers the action itself. In this way we move all the load from the master to the slaves.
The notification mechanism can be built over Service Bus. Each slave node will be represented by a different subscription. When the download/update/delete action is finished, the slave node can send a notification back to the master node using a queue or a Service Bus Topic.
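The slave endpoint's report resolver could look roughly like this (a sketch using the Windows Azure Storage client library; the account, container and blob names are invented):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

// Hypothetical slave-side resolver: returns a time-limited, read-only URL
// for a report blob, so the download goes straight to Azure Storage.
var credentials = new StorageCredentials("slaveaccount", "<account-key>");
var client = new CloudStorageAccount(credentials, useHttps: true).CreateCloudBlobClient();
var blob = client.GetContainerReference("reports")
                 .GetBlockBlobReference("de/sales-2013.xlsx");

string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30) // short-lived access
});

string downloadUrl = blob.Uri + sas; // hand this URL back to the client
```

Because the SAS is computed locally from the account key, the endpoint does one cheap signing operation per request while the actual bytes are served by blob storage.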

Things that I like about this solution:

  • Traffic Manager – automatically redirects requests and, when it detects that a slave node is down, redirects to the next closest slave
  • SAS – the content of the blob can be shared in a secure manner
  • Slave's endpoint – if a slave is hit by a lot of clients, we can scale up the number of instances of that slave without affecting the rest of the slaves
  • Redundancy – when a slave is down, all the requests are redirected to the closest slave
  • Report resolver – when a report cannot be resolved by a specific slave, the request is resolved by the global slave, which is able not only to log these issues but also to notify the master node about the incident; the master node can then trigger custom actions like provisioning
  • Scalability – each slave can scale independently based on its load
  • Provisioning mechanism – the provisioning is made using the slave's processor resources; in this way the master node will not have load peaks
  • Service Bus – notifications from master to slaves are made using Service Bus Topics; in this way we can have one or more slaves registered for the same countries
  • Download – the download itself is made directly from Azure Storage, so the load on the slaves is minimal

We could take a similar approach and eliminate the endpoints from the slaves, keeping only the storage part on each slave. This is a good solution when the number of requests that need to be handled is not very high. But when you have hundreds of requests per minute, the solution with endpoints is more suitable; the slave endpoints can be hosted on small instances.




CMS application for 3 different mobile platforms – Hybrid Mobile Apps + Mobile Services

Posted on 05:18 by Unknown
These days I attended a session where the idea of a great mobile application was presented. From the technical perspective the application is very simple, but the content that is delivered to the consumer has great value.

The proposal was 3 different applications, one for each platform (iOS, Android and Windows Phone). When I asked why it is so important to have 3 different native applications for an app whose main purpose is to bring content to the users, I found out that they need push-notification support; all the rest of the content will be loaded from the server (CMS).
Having 3 different native applications means that you will need to develop the same application 3 times for 3 different platforms… 3x maintenance, 3x more bugs and so on…
At that moment I had a déjà vu; I have the feeling that I have already written about this.
For this kind of application you develop a great HTML5 application. For push-notification support it is very easy to create 3 thin native applications that host a WebBrowser control displaying the HTML content of your application. Only custom behaviors, like push notifications, need to be developed for the specific platforms.
In this way the applications will be very simple and the cost of development and maintenance will be very low.
For push notifications I recommend using Mobile Services for Windows Azure. It is a great service that can push a notification to all devices and platforms with minimal development cost.


Tuesday, 15 October 2013

How to track who is accessing your blob content

Posted on 01:24 by Unknown
In this post we'll talk about how we can monitor our blobs from Windows Azure.
When hosting content on storage, one of the most common requests coming from clients is: how can I monitor each request that reaches my storage? For them it is very important to know:

  • Who downloaded the content?
  • When was the content downloaded?
  • How did the request end (with success or not)?

I saw different solutions to this problem. Usually they involve a service that is called by the client after the download ends. There is nothing wrong with having a confirmation service, but it is another service that you need to maintain, and it will still be pretty hard to identify why a specific device cannot download your content.
Windows Azure now has a built-in feature that gives us the possibility to track all the requests made to the storage account. In this way you will be able to provide all the information related to the download process.
Monitor
In the Windows Azure portal you will need to go to your storage account and navigate to the Monitoring section. For blobs you will need to set the monitoring level to Verbose, so that all metrics related to your storage are persisted; the main difference between Minimal and Verbose is the level of detail that is collected.
This data can be retained from 1 day to 1 year. Based on your needs and how often you collect the data, you can set the value that suits you best. If your storage is used very often, I recommend setting a maximum of 7 days. You can define a simple process that extracts the monitoring information from the last 7 days, stores it in a different location and analyzes it using your own rules. For example, you may want to raise an alert to your admins if a request coming from the same source failed more than 10 times.
All the monitoring information is persisted in an Azure table in your storage account named ‘$MetricsCapacityBlob’. At this moment we cannot direct the monitoring data for a specific blob to a specific table, but we can query this table and select only the information that we need.
Logging
The feature that we really need is logging. Using the logging functionality we will be able to trace the full request history. It can be activated from the portal, under the Logging section, for the 3 main operations that can be made on a blob: Read/Write/Delete.
All this data is stored under the $logs container:
https://<accountname>.blob.core.windows.net/$logs/blob/YYYY/MM/DD/hhmm/counter.log
Almost everything we can imagine can be found in these logs:

  • Successful and failed requests, with or without a Shared Access Signature
  • Server errors
  • Timeout errors
  • Authorization, network and throttling errors
  • ...

Each log entry contains helpful information like:

  • LogType (write, read, delete)
  • StartTime 
  • EndTime
  • LogVersion (for the future; at this moment we have only one version – 1.0)
  • Request URL
  • Client IP
  • …

The most useful information is usually found under ‘Client IP’ and ‘Request URL’. Maybe you ask yourself why we have both a start time and an end time. This can be very useful for a read request, for example: it tells us how long the download process took.
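As a small illustration of the naming convention above, here is a helper (my own, not part of any SDK) that computes the hour prefix under which the log blobs for a given moment are stored — the minutes part of the path segment is always 00, and only the counter part varies per log file:

```csharp
using System;
using System.Globalization;

// Builds the $logs prefix for the hour that contains 'momentUtc',
// following the URL pattern shown above.
string LogHourPrefix(string accountName, DateTime momentUtc)
{
    return string.Format(
        CultureInfo.InvariantCulture,
        "https://{0}.blob.core.windows.net/$logs/blob/{1:yyyy/MM/dd/HH}00",
        accountName,
        momentUtc);
}

Console.WriteLine(LogHourPrefix("myaccount", new DateTime(2013, 10, 15, 9, 30, 0, DateTimeKind.Utc)));
// → https://myaccount.blob.core.windows.net/$logs/blob/2013/10/15/0900
```

Listing the blobs under such a prefix gives you all the log files for that hour.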

I invite you to explore this feature when you need to track the clients that access your blob resources.

Friday, 11 October 2013

Certificates and resource management

Posted on 20:57 by Unknown
Last month I had two posts where I talked a little about certificates in .NET (1, 2).
In this post I want to discuss resource management when X509Certificate2 is used. This is applicable to X509Certificate as well.
When we use a .NET object we usually check whether the IDisposable interface is implemented and, if yes, call the ‘Dispose’ method. Unfortunately, X509Certificate doesn't implement this interface.
Behind this class there is a handle (SafeCertContextHandle) to unmanaged resources, so each new instance of a certificate holds a handle to unmanaged memory. If you need to process 2,000 certificates from the store, it is very easy to get a wonderful “OutOfMemoryException”.
To release all the resources used by a reference to a certificate you need to call the ‘Reset’ method. This method releases all the resources associated with that certificate.
X509Certificate2 cert = …
…
cert.Reset();
X509Store has the same story: you will need to call the ‘Close’ method.
X509Store store = …
…
store.Close();
The ‘Close’ method is pretty easy to spot when you look over the available methods; the name gives you a hint that there are resources that need to be released. The ‘Reset’ method name doesn't help you as much, and because of this it is very easy to end up with an “OutOfMemoryException”.
I don't understand why X509Certificate2 doesn't implement the IDisposable interface; the dispose idiom should be applied in this case as well.
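Since ‘Reset’ and ‘Close’ play the role that ‘Dispose’ normally would, the usual try/finally discipline still applies. A minimal sketch:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

int processed = 0;
var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
try
{
    store.Open(OpenFlags.ReadOnly);
    foreach (X509Certificate2 cert in store.Certificates)
    {
        try
        {
            // ... work with the certificate ...
            processed++;
        }
        finally
        {
            cert.Reset(); // releases the unmanaged handle behind the certificate
        }
    }
}
finally
{
    store.Close(); // same story for the store handle
}
Console.WriteLine(processed + " certificates processed");
```

With this pattern, a loop over thousands of certificates keeps its unmanaged footprint flat instead of accumulating handles until the garbage collector (or an OutOfMemoryException) intervenes.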


Feature of a Smart Devices that I miss

Posted on 04:04 by Unknown
Nowadays, everyone has a smart device: a phone, a tablet, a watch or a camera. These devices give us the power to stay connected with the world in real time and to access our information extremely fast.
All these devices have different security mechanisms which allow us to track them, make them ring and erase all the data.
But once the device is stolen and all the data is deleted, the “new” owner can register and use the device without any kind of problem. From that point it is pretty complicated to track our device and recover it.
A feature that I miss on these new smart devices is the ability to track and block them after someone resets them. Based on the unique ID of each device, we should be able to track them even after a factory reset.
I would like to see, for devices that were already registered online, a confirmation request sent to the old owner when the device is registered with another account.
Imagine that you buy a device and someone steals it: from that moment they would not be able to use it, and the smart device would become just a brick. I don't have anything against restricting the use of a device (internet and online access) from the moment it is marked as stolen, and I would expect this behavior to survive even a factory reset.
In this way, people would not be able to use devices purchased on the “black market”.

Thursday, 3 October 2013

How to assign a private key to a certificate (from command line or code)

Posted on 02:39 by Unknown
In the last post we saw how we can extract a certificate from a PKCS#7 store. In this post we will see two different ways to register a certificate into the local store using C#.
The first method that can be used to register a certificate into the store is the X509Store class.
X509Certificate2 cert = new X509Certificate2(...);
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadWrite);
store.Add(cert);
store.Close();
If you want to import the private key together with the certificate, you will need to pass to the certificate constructor the flags that tell the system “please persist the private key”.
X509Certificate2 cert = new X509Certificate2(
    [path],
    [password],
    X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadWrite);
store.Add(cert);
store.Close();
This solution works in 99.9% of cases, but I had the luck to discover that there are some certificates that don't accept the import of the private key from C# code. I didn't understand why, but it seems there are cases when the private key is simply not persisted. In these situations, the only solution I found was to take a step back and use command-line commands. You will need two different commands.
The first command installs the certificate in the local store:
certutil -user -privatekey -addstore my [certificatePath]
The second command will “repair” the certificate; basically, the private key is attached to the certificate.
certutil -user -repairstore my "[certThumbprint]"
To be able to use the second command you need to specify the thumbprint of the certificate, which can be obtained very easily from the store (X509Store). Each such command can be run from C# using “Process.Start(command, params)”.
The most important command, when you need to assign a private key to a certificate that doesn't have one, is the second one.
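A sketch of running the second command from C# (the argument building is mine, “RepairCertificate” is a hypothetical helper, and certutil itself is only available on Windows):

```csharp
using System;
using System.Diagnostics;

// Builds the argument string for the "repair" step described above.
// The thumbprint is quoted so certutil receives it as a single token.
string BuildRepairArgs(string thumbprint)
{
    return string.Format("-user -repairstore my \"{0}\"", thumbprint);
}

// Hypothetical runner - on a real Windows machine this invokes certutil.
void RepairCertificate(string thumbprint)
{
    var psi = new ProcessStartInfo("certutil", BuildRepairArgs(thumbprint))
    {
        UseShellExecute = false,
        CreateNoWindow = true
    };
    using (var process = Process.Start(psi))
    {
        process.WaitForExit();
        if (process.ExitCode != 0)
            throw new InvalidOperationException("certutil failed: " + process.ExitCode);
    }
}

Console.WriteLine(BuildRepairArgs("a1b2c3"));
// → -user -repairstore my "a1b2c3"
```

Checking the exit code matters here: certutil reports failure through it, and a silently failed repair leaves you with exactly the key-less certificate you started from.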
Good luck!

Wednesday, 2 October 2013

How to get a certificate from a PKCS7 file

Posted on 02:14 by Unknown
In the last period of time I was forced to play a lot with certificates from .NET code (in my case, C#). The good part of using .NET is that you have an API that can be used to manage system certificates.
It is pretty simple to load a certificate from the certificate store, from a file or from an array of bytes.
X509Certificate2 cert = new X509Certificate2(bytes);
In the above example I am loading a certificate from an array of bytes. The code is pretty simple and works great.
There are situations when you receive a certificate from an external system. In those cases, the first step is to save the certificate locally. When you are a 3rd party receiving a certificate from a server, you should be aware of the format of the certificate.
In my case I received a PKCS#7-signed byte array without knowing it, and I assumed that it was enough to load the certificate using the constructor. The funny thing is that you can load a signed PKCS#7 file using the certificate constructor without any kind of problem, but you will not load the certificate that you expect.
This is happening because a PKCS#7 signed file can contain more than one certificate, and the X509Certificate2 constructor loads the certificate that was used to sign the store rather than the certificates that can be found in the raw data of the PKCS#7 file.
To be able to access and load a certificate from the raw data of a PKCS#7 file you will need to use SignedCms. This class gives you the possibility to decode a message in PKCS#7 format and is the best and simplest way to access all the certificates from a signed file.
SignedCms cms = new SignedCms();
cms.Decode(bytes);
X509Certificate2Collection certs = cms.Certificates;
The Certificates property is an X509Certificate2Collection that contains all the certificates from the signed file.


Microsoft Summit 2013, 6-7 November in Bucharest

Posted on 01:39 by Unknown
Did you hear about Microsoft Summit 2013? This year Microsoft Romania organizes an event dedicated to IT, with special tracks for Developers, IT Professionals and Business Managers.
The speaker list contains interesting names like David Chappell, Chris Capossela and Beat Schwegler.
At this summit I will have two sessions. In the first I will talk about how you can create and run load tests using the new features of Visual Studio 2013 and Windows Azure; in the second we will talk about choosing the right path when developing Windows Phone 8 and Windows 8 applications.
I invite you to visit and register for this summit: www.mssummit.ro

See you there!

MVP Award - Renewed

Posted on 01:23 by Unknown
I am pleased to announce that my Windows Azure MVP award was renewed.
Thank you for this award; I hope to continue to bring value to the online and offline communities.

