Windows Mobile Support


Friday, 22 November 2013

Service Bus - Optimize consumers using prefetch and maximum concurrent calls features

Posted on 12:41 by Unknown
A few SDK versions ago, Windows Azure Service Bus introduced support for 'event notification'. This means that we can register a callback that will be triggered each time a new message is available for us.
QueueClient client = QueueClient.Create("queue1");
client.OnMessage(
    OnMsgReceived,
    new OnMessageOptions());
...
void OnMsgReceived(BrokeredMessage message)
{
    ...
}

This is a great feature that usually makes our life easier. By default, when we consume messages in this way, we make a roundtrip to the Service Bus for each message. When we have applications that handle hundreds of messages, the roundtrip to the server for each message can cost us time and resources.
Windows Azure Service Bus offers us the possibility to specify the number of messages that we want to prefetch. This means that we will be able to fetch 5, 10, 100, ... messages from the server using a single request. We could say that it is similar to the batch mechanism, but it can be used with the event notification feature.
QueueClient client = factory.CreateQueueClient("queue1");
client.PrefetchCount = 200;
We should be aware that the maximum number of messages that can be prefetched in one request is 200. I think that this is an acceptable value, if we take into consideration that it will trigger 200 notification events.
This value needs to be set when we create and set up the queue client. Theoretically you can set it any time before receiving the first message from the server, but I recommend making all this configuration in the setup and initialization phase.
When we are using this prefetching mechanism, we need to know exactly how many concurrent calls we can have. Because of this, the Service Bus client gives us the possibility to specify how many concurrent calls we can have at the same time.
client.OnMessage(CalculateEligibility, new OnMessageOptions()
{
    MaxConcurrentCalls = 100
});
For example, if we had the prefetch count set to 200 and the maximum concurrent calls set to 100, we would have the following behavior:
  • Make a roundtrip to the server
  • Receive 200 messages (we suppose that there are 200 messages available)
  • Consume the first 100 messages
  • Consume the other 100 messages
Messages are consumed in an async way. This means that from the first 100 messages, as soon as one is processed on the client, another message received event will be triggered.
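Putting the two settings together, a minimal consumer could look like the sketch below. The queue name and the handler body are illustrative, and the actual values should be tuned for your own workload:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class PrefetchConsumer
{
    static void Main()
    {
        // Illustrative sketch: prefetch batches of messages and process
        // up to 100 of them concurrently. "queue1" is a placeholder name.
        QueueClient client = QueueClient.Create("queue1");
        client.PrefetchCount = 200; // up to 200 messages per roundtrip

        client.OnMessage(
            message =>
            {
                // Up to MaxConcurrentCalls callbacks run at the same time.
                Console.WriteLine(message.MessageId);
                message.Complete();
            },
            new OnMessageOptions
            {
                MaxConcurrentCalls = 100,
                AutoComplete = false // we call Complete() ourselves
            });

        Console.ReadLine(); // keep the process alive while pumping messages
    }
}
```

Note that AutoComplete is turned off here only to make the completion step explicit; leaving the default is also fine.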
We saw in this post how we can consume messages in parallel when using Windows Azure Service Bus and event notification. Using these features we can increase our application's performance.
Good luck with Windows Azure Service Bus.

PS: Bonus picture - This is a picture from Seattle airport, where I am now, waiting for the flight back home.
Posted in Azure, Cloud, service bus, Windows Azure

Thursday, 21 November 2013

Extract relative Uri using MakeRelativeUri method

Posted on 10:59 by Unknown
Did you ever need to compare two web addresses and extract the relative part of them?
For example, if we have the addresses "http://foo.com" and "http://foo.com/car/color/power" we would like to get "car/color/power". But if we had "http://foo.com/car/" and "http://foo.com/car/color/power" we would like to get "color/power".
For these cases we should use "MakeRelativeUri". This method is part of the Uri class and determines the delta between two URL addresses.
Code examples:

Uri uri = new Uri("http://foo.com");
Uri result = uri.MakeRelativeUri(new Uri("http://foo.com/car/color/power"));
// result: "car/color/power"

Uri uri = new Uri("http://foo.com/car/");
Uri result = uri.MakeRelativeUri(new Uri("http://foo.com/car/color/power"));
// result: "color/power"

Uri uri = new Uri("http://foo.com");
Uri result = uri.MakeRelativeUri(new Uri("http://foo.com/car/color/power/index.html"));
// result: "car/color/power/index.html"

Uri uri = new Uri("http://foo.com/car/color/power/");
Uri result = uri.MakeRelativeUri(new Uri("http://foo.com/"));
// result: "../../../"

Uri uri = new Uri("http://foo.com");
Uri result = uri.MakeRelativeUri(new Uri("http://secondFoo.com/car/color/power"));
// result: "http://secondFoo.com/car/color/power"

When the base address of the second URL is not the same as the first (base) one, the call of this method will return the second URL that was sent as a parameter.
This method can be very useful in certain situations, when you know about it. In one project I saw an extension implemented by developers that was doing exactly the same thing :-).
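If you want to guard against the cross-host case before computing the delta, the Uri class also offers IsBaseOf. A small sketch (the host names are illustrative):

```csharp
using System;

class RelativeUriGuard
{
    static void Main()
    {
        Uri baseUri = new Uri("http://foo.com/");
        Uri target = new Uri("http://secondFoo.com/car/color/power");

        // Only compute a relative path when the target really lives
        // under the base address; otherwise there is no meaningful delta.
        string delta = baseUri.IsBaseOf(target)
            ? baseUri.MakeRelativeUri(target).ToString()
            : null;

        Console.WriteLine(delta ?? "different host - no relative path");
    }
}
```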

Tuesday, 19 November 2013

Sync Group - Let's talk about Performance

Posted on 17:21 by Unknown
In one of my latest posts I talked about the synchronization functionality that is available for SQL Azure. There was a question related to the performance of this service.
So, I decided to run a performance test to see what the performance looks like. Please take into account that this service is in preview and the performance will change when the service is released.
For this test I had the following setup:
  • Database
    • Size 7.2 GB
    • 15 tables
    • 2 tables with more than 30,000,000 rows (one table had around 3.2 GB and the other one had 2.7 GB)
    • 34,378,980 rows in total
  • Database instances
    • 1 DB in West Europe (Hub)
    • 1 DB in West Europe
    • 1 DB in North Europe
    • 1 DB in North Central US
  • Agent
    • 1 agent in West Europe
  • Configuration
    • Hubs win
    • Sync From Hub
Scenario One: Initialize Setup
I started from the presumption that your data was not yet duplicated on all the databases. The first hit of the Sync button will duplicate the database schema of the tables that need to be synced, the table content and the rest of the resources to all the databases. This means that 7.2 GB were sent to the 3 different databases.
Normally you can do this action in other ways, for example exporting/importing the database, but I wanted to see how long it takes to sync all the databases.
Sync action duration: 5 hours and 36 minutes (20160.17 seconds)
 
Scenario Two: Update 182 rows
In this scenario I updated 182 rows in one of the tables.
Sync action duration: 53.63 seconds

Scenario Three: No changes
In this case I triggered the synchronization action without any changes.
Sync action duration: 38.47 seconds

Scenario Four: 23,767 rows updated
23,767 rows were updated on the hub database.
Sync action duration: 1 minute and 16 seconds (76 seconds)

Scenario Five: 4,365,513 rows updated
As in the previous scenarios, I updated a specific number of rows.
Sync action duration: 1 minute and 41 seconds (101.6 seconds)

Scenario Six: 76,353 rows deleted
From one of the tables I deleted 76,353 rows.
Sync action duration: 56.26 seconds
 
As we can see, the synchronization action itself takes a very short period of time. For the 4.5M rows that were updated, the synchronization action took less than 2 minutes. The only scenario that took a long period of time was the initial synchronization action. Usually this action is made only one time. Also, we have other methods to import the database content into all our databases.
I would say that the performance of the sync service is very good and I invite all of you to check it out. You have support for synchronization out of the box.
Great job!
Posted in Azure, Cloud, Sql Azure, Windows Azure

Thursday, 14 November 2013

[PostEvent] Slides from MSSummit 2013, Bucharest

Posted on 18:46 by Unknown
Last week I had the opportunity to participate at MSSummit. During this event I presented and talked about SignalR and load testing using Windows Azure and Visual Studio 2013.
More about this event: http://vunvulearadu.blogspot.ro/2013/11/postevent-mssummit-2013-bucharest.html
You can find my session slides below:
Real time fluent communication using SignalR and Cloud (Windows Azure) from Radu Vunvulea


Load tests using visual studio 2013 and Cloud from Radu Vunvulea
Posted in eveniment, event

How to get the instance index of a web role or worker role - Windows Azure

Posted on 18:26 by Unknown
When we have one or more instances of a specific web role or worker role in the cloud, there are moments when we want to know from code how many instances we have of a specific type, or the index of the current instance.
The total number of instances of a role can be obtained using:
RoleEnvironment.Roles["[roleName]"].Instances.Count
To be able to detect the index of the current instance we need to parse the id of the role instance. Usually the id of the current instance ends with the index number. Before this number we have the '.' character if the instance is in the cloud, or '_' when we are using the emulator.
Because of this we end up with the following code when we need to get the index of the current instance:
int currentIndex = 0;
string instanceId = RoleEnvironment.CurrentRoleInstance.Id;
bool withSuccess = int.TryParse(
    instanceId.Substring(instanceId.LastIndexOf(".") + 1), out currentIndex);
if (!withSuccess)
{
    withSuccess = int.TryParse(
        instanceId.Substring(instanceId.LastIndexOf("_") + 1), out currentIndex);
}
Take into account that when you increase and decrease the number of instances, there are situations when you can end up with the following instance names:
  • […].0
  • […].1
  • […].3
  • […].4
Two is missing because when we decreased the number of instances from 5 to 4, that instance was stopped.

Posted in Azure, Cloud, Windows Azure

Sync Group - A good solution to synchronize SQL Databases using Windows Azure infrastructure

Posted on 07:31 by Unknown
Working with Windows Azure becomes more and more pleasant. In this post we will see how you can synchronize multiple databases hosted on Azure or on premise using a Sync Group.
First of all, let’s start with the following requirement: We have an application that contains a database that needs to be replicated in each datacenter. What should we do to replicate all the content in all the datacenters?
A good response could be a Sync Group (SDS). Using this feature we can define one or more instances of SQL Databases (from the cloud or on premise) that will be synchronized. For this group we can specify the tables and columns that will be synchronized.
Creating an SDS can be done very easily from the Windows Azure portal. This feature can be found under the SQL DATABASES tab – SYNC. I will not explain how you can create this group, because it is very easy; you only need to know the database server addresses, user names and passwords.
I think that more important than this are the options that we have available on an SDS.

HUB
One of the databases that form the group needs to be the hub. The hub represents the master node of the group, from where all the data propagates.
Synchronization Direction
Once we add the hub, we can add more databases to the group. At this moment we will need to specify the synchronization direction. We have 3 options:

  • From the Hub – All the changes that are made in the hub are replicated to the rest of the databases from the group. In this configuration, when data differ, the hub will win. Changes in the databases are not written to the hub
  • To the Hub – Changes from the hub are not written to the databases. All changes that are made in the databases are written to the hub
  • Bi-directional – The synchronization is made in both ways – from hub to databases and from databases to hub


Synchronization Rules
From the portal, we have the option to select the tables (and columns) from the hub that will be synchronized. In this way you don't need to have databases that have the same schema. The most important thing is to have the tables that you want to synchronize in the same schema/format.
Remarks: We don't need to replicate the database schema to all the databases. Once we select the tables and columns that we want to synchronize (from the hub schema), all these tables will be replicated to the rest of the group.

Conflict Resolution Policies
When we are creating a hub, we have two options for conflict resolution.

  • Hub Wins – In this case all the changes that are written to the hub will be persisted and, in case of a conflict, the version that is on the hub will be the ‘good one’
  • Client Wins – In this case the changes that are written on the slaves (non-hub databases) will win and the change from the slave will propagate to the hub and to the rest of the group.

For more information about these conflict resolution policies I recommend searching on MSDN.

Synchronization Frequency
The synchronization between the databases of a group is not made in real time. We can select the time interval when the synchronization needs to be made. This time interval can be between 5 minutes and 1 month. Also, we have a button that can trigger the synchronization action.

On-premise SQL Server
To be able to use this feature with an on-premise SQL Server you will need to download and install SQL Data Sync, a tool that integrates this functionality into SQL Server.

Logs
All the synchronization actions between group nodes are logged. Using this information we can determine how long a synchronization action took, which nodes were synchronized and how the action ended.

I think that this feature has a lot of potential. Why? Because database synchronization can be made very easily now. This feature can really add value to your application with minimal costs and headaches.
Posted in Azure, Cloud, sql, Sql Azure, Windows Azure

Throttling and Availability over Windows Azure Service Bus

Posted on 02:15 by Unknown
In today's post we will talk about different redundancy mechanisms for Windows Azure Service Bus. Because we are using Windows Azure Service Bus as a service, we need to be prepared for when something goes wrong.

This is a very stable service, but when you design a solution that needs to handle millions of messages every day you need to be prepared for worst case scenarios. By default, Service Bus is not geo-replicated to different data centers; because of this, if something happens in the data center where your namespace is hosted, then you are in big trouble.
The most important thing that you need to cover is the case when the Service Bus node is down and clients cannot send messages anymore. We will see later on how we can handle this problem.
First of all, let’s see why a service like Service Bus can go down. Well, like other services, it has dependencies on databases, storage, other services and resources. There are cases when we can detect the cause of the problem pretty easily.
For example, when we receive a ‘ServerBusyException’ we know that the service doesn’t have enough resources (CPU, memory, ...) and we need to retry later. The default retry period is 10 seconds. It is recommended not to set a value under 10 seconds.
This problem can be mitigated pretty easily with partitioning. When we are using partitioning, a topic or a queue is split across different message brokers. This means that there is less chance of having our service down. Also, if something happens with one of the brokers, we will still be able to use the topic/queue without any kind of problem. Don’t forget that the brokers will be in the same data center. Using this feature doesn’t increase your costs.
Enabling this feature can be done in different ways: from the portal, from Visual Studio Server Explorer or from code.
NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString("...");
TopicDescription topicDesc = new TopicDescription("[topicName]")
{
    EnablePartitioning = true
};

namespaceManager.CreateTopic(topicDesc);
It is that simple to use. You should know that at this moment you can have a maximum of 100 topics/queues per namespace with this feature activated, but I expect this value to change in the future. You should also know the different behaviors that occur when messages are routed to the brokers of a partitioned entity:

  • Partition Key – Messages from the same transaction that have the same partition key but don’t have a session id will be sent to the same broker.
  • Session Id – All messages with a specific session id will be sent to the same broker.
  • Message Id – Messages that are sent to a queue/topic with duplicate detection activated will be sent to the same broker. Because of this I recommend using message duplicate detection only where it is necessary.
  • None of the above – Messages are sent to the brokers in a round-robin manner – one message to each broker.
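For illustration, this is how a sender could influence the routing described above; the queue name and key values are made up, and PartitionKey/SessionId are the legacy .NET SDK properties on BrokeredMessage:

```csharp
using Microsoft.ServiceBus.Messaging;

class PartitionedSender
{
    static void Main()
    {
        // "partitionedQueue" is a placeholder for a queue created with
        // EnablePartitioning = true.
        QueueClient queueClient = QueueClient.Create("partitionedQueue");

        BrokeredMessage message = new BrokeredMessage("payload");
        message.PartitionKey = "customer-42"; // same key -> same broker
        // or: message.SessionId = "session-7"; // same session -> same broker

        queueClient.Send(message);
    }
}
```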

Another downtime cause can be when the Service Bus service is upgraded. In these cases the service will still work, but we can have 15-20 minutes of latency until the message appears in the queue/topic. The most important thing in this case is that we don’t lose any message.
Also, when the system is not stable (internal causes), brokers will be automatically restarted. The restart can take one or more minutes. In this case the service will throw a MessagingException or a TimeoutException. This problem is resolved out of the box by the client SDKs (if you are using the .NET SDK). They have a built-in retry policy that will try to resend the message. If the retry policy is not able to send the message, an exception is thrown that can be handled in different ways. Until now, all the issues related to this were handled with success by the retry policy.
Custom configuration of the retry policy can be made on the messaging factory.
MessagingFactory messagingFactory = MessagingFactory.Create();
messagingFactory.RetryPolicy = RetryExponential.Default;
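If the default policy does not fit your scenario, a custom exponential policy can be configured as well; the values below are illustrative, not recommendations:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

class RetryConfiguration
{
    static void Main()
    {
        MessagingFactory messagingFactory = MessagingFactory.Create();

        // Exponential backoff between 1 and 30 seconds, at most 5 retries.
        messagingFactory.RetryPolicy = new RetryExponential(
            TimeSpan.FromSeconds(1),   // minimum backoff
            TimeSpan.FromSeconds(30),  // maximum backoff
            5);                        // maximum retry count
    }
}
```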
The last main cause of failure is external: internet connectivity problems, electrical outages or human errors. This problem is handled with a very different approach. The client needs to detect the problem and handle it. Until now, this required custom code to be written that would redirect the messages to a topic/queue in another datacenter (namespace).
From now on we can use paired namespaces to handle this scenario. A paired namespace gives us the possibility to specify a second namespace (that can be in a different data center) that will be used to send messages until the primary one is up and running again. When messages are sent to the secondary namespace, they will be persisted until the primary namespace is back up. At the moment when the primary namespace is running again, all messages from the secondary one will be redirected to the primary. We can imagine the secondary namespace as a buffer that is used to store messages until our main namespace is in a good state.
When we configure this feature, we can also set the failover interval. This is the time interval during which our system will accept failures before switching to the secondary namespace. The recommended (and default) value is 10 seconds. You will also need to specify the number of queues that are used to store the messages in the secondary namespace (the default value is 10). This value should be greater than or equal to 10.
The last option that you should be aware of is the syphon (the ‘enableSyphon’ parameter). When you activate this on a client, you tell the system that this is the client that will transfer the messages from the secondary namespace back to the primary one. Usually this value should be set on the consumer clients (backend), because usually clients only send messages to the topics/queues.
NamespaceManager primaryNM = NamespaceManager.CreateFromConnectionString("...");
MessagingFactory primaryMF = ...
NamespaceManager secondaryNM = NamespaceManager.CreateFromConnectionString("...");
MessagingFactory secondaryMF = ...

SendAvailabilityPairedNamespaceOptions sao =
    new SendAvailabilityPairedNamespaceOptions(secondaryNM, secondaryMF);
primaryMF.PairNamespaceAsync(sao).Wait();
There are some small things that we should know related to this feature:

  • State and order are guaranteed only in the primary queue. When using sessions, the order of the messages is not guaranteed when the secondary namespace is used
  • Messages are consumed only from the primary queue/subscription
  • You will pay the extra cost of moving messages from the secondary namespace to the primary one
  • The default name of the queues that are created on the secondary namespace is ‘x-servicebus-transfer/i’ (where ‘i’ can have a value from 0 to n)
  • The queue from the secondary namespace is randomly chosen
  • It is not recommended to change the configuration of the queues from the secondary namespace



We saw that we have different mechanisms to handle these special scenarios. We don’t have one mechanism that handles all the use cases. Before starting to think about integrating all these features, ask yourself if you need all of them. There are cases when a 10-15 minute downtime is acceptable.

Posted in Azure, Cloud, service bus, Windows Azure

Tuesday, 12 November 2013

How to monitor clients that access your blob storage?

Posted on 09:29 by Unknown
Some time ago I wrote about the monitoring and logging support available for Windows Azure Storage. In this post we will talk about how we can use this feature to detect which clients are accessing the storage.
Let’s assume that we have a storage account that is accessed by 100,000 users. At the end of the month we should be able to detect the users that downloaded a specific content with success.
What should we do in this case?
(Classic Solution) Well, in a classic solution, we would create an endpoint that is called by the client after he downloads the specific content with success. In this case we would need to create a public endpoint, host it, persist the call messages, manage and maintain the solution and so on.
In the end we would have additional costs.


What should we do in this case?
(Windows Azure Solution) Using Windows Azure Storage, we can change the rules of the game. Windows Azure Storage offers us out-of-the-box support for a logging mechanism. All the requests that are made to our storage will be logged.

Based on this idea, we activate this feature and log all the accesses to the storage. In the following example you can see how the logs look for a file called ‘m.txt’ under a container named ‘container’.
1.0;2013-11-12T13:03:23.4277012Z;GetBlob;AnonymousSuccess;200;7;7;anonymous;;radudemo;blob;"http://radudemo.blob.core.windows.net/container/m.txt";"/radudemo/container/m.txt";98ebc2d2-249f-47d2-8a3b-e7bbcb5e3c59;0;86.124.100.155:21131;2009-09-19;379;0;284;7;0;;;"0x8D0ADBEAF6D6A3C";Tuesday, 12-Nov-13 13:03:15 GMT;;"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36";;
As you can see, we have all the necessary information to identify the client, including the client IP. But we want to do more than that. We want to be able to identify clients based on our own identification mechanism.
To be able to do something like this, we have two options:
Query parameters
We can add a custom query parameter that represents the client’s unique ID. For example we can make the following request:
http://radudemo.blob.core.windows.net/container/m.txt?clientID=123
In this case, the client will receive the file and in the logs we will have:
1.0;2013-11-12T13:03:36.3117012Z;GetBlob;AnonymousSuccess;200;14;14;anonymous;;radudemo;blob;"http://radudemo.blob.core.windows.net/container/m.txt?clientID=123";"/radudemo/container/m.txt";c90e928f-f3e7-4c4f-859b-4bb0051f0271;0;86.124.100.155:21131;2009-09-19;392;0;284;7;0;;;"0x8D0ADBEAF6D6A3C";Tuesday, 12-Nov-13 13:03:15 GMT;;"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36";;
In the logged URL we can find the clientID, which can be filtered, processed and so on.
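Once the logs are collected, pulling the clientID out of a log line can be a simple string exercise. A rough sketch (the clientID parameter name is our own convention from above):

```csharp
using System;

class LogClientIdExtractor
{
    // Storage analytics log lines are semicolon-separated; the request URL
    // is one of the quoted fields. We look for our custom clientID parameter.
    public static string ExtractClientId(string logLine)
    {
        foreach (string field in logLine.Split(';'))
        {
            string value = field.Trim('"');
            if (!value.StartsWith("http", StringComparison.OrdinalIgnoreCase))
                continue;

            int pos = value.IndexOf("clientID=", StringComparison.OrdinalIgnoreCase);
            if (pos < 0)
                continue;

            string id = value.Substring(pos + "clientID=".Length);
            int amp = id.IndexOf('&');
            return amp >= 0 ? id.Substring(0, amp) : id;
        }
        return null;
    }

    static void Main()
    {
        string line = "1.0;2013-11-12T13:03:36Z;GetBlob;AnonymousSuccess;200;"
            + "\"http://radudemo.blob.core.windows.net/container/m.txt?clientID=123\";...";
        Console.WriteLine(ExtractClientId(line)); // prints "123"
    }
}
```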
User Agent
Another option is to set a custom user agent. Using a custom user agent it will be very easy for us to identify which client made the requests.
1.0;2013-11-12T13:03:36.3117014Z;GetBlob;AnonymousSuccess;200;14;14;anonymous;;radudemo;blob;"http://radudemo.blob.core.windows.net/container/m.txt";"/radudemo/container/m.txt";c90e928f-f3e7-4c4f-859b-4bb0051f0271;0;86.124.100.155:21131;2009-09-19;392;0;284;7;0;;;"0x8D0ADBEAF6D6A3C";Tuesday, 12-Nov-13 13:03:15 GMT;;"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36 ClientID/123";;

Both solutions are similar; we are only sending the unique client id in different locations. These log files can be very easily processed and analyzed using different solutions. A good option when you have a lot of log files is Hadoop.
The advantage of using such a solution is the out-of-the-box support. You don’t need to create and manage another system for this. Also, you eliminate the case when the client is able to download/access the content but is not able to confirm the download.
The only additional cost compared to a classic solution is the storage cost for the log data itself, plus the transaction costs. These costs are very low in comparison with the case when you would create all the logging infrastructure yourself. In both cases you would need to pay the storage costs of the logs.
Posted in Azure, Windows Azure

Digging through SignalR - Dependency Resolver

Posted on 03:56 by Unknown
I’m continuing the series of posts related to SignalR with dependency injection.
When you have complicated business logic you start to group different functionalities in classes. Because of this you can very easily end up with classes that accept 4-5 or even 10 parameters in the constructor.
public abstract class PersistentConnection
{
    public PersistentConnection(
        IMessageBus messageBus, IJsonSerializer jsonSerializer,
        ITraceManager traceManager, IPerformanceCounterManager performanceCounterManager,
        IAckHandler ackHandler, IProtectedData protectedData,
        IConfigurationManager configurationManager, ITransportManager transportManager,
        IServerCommandHandler serverCommandHandler, HostContext hostContext)
    {
    }
    ...
}
Of course you have a decoupled solution that can be tested very easily, but at the same time you have a fat constructor.
People would say: “Well, we have a dependency injector; the resolver will handle the constructor and resolve all the dependencies”. This is true, the resolver will inject all the dependencies automatically.
In general, because you don’t want to have a direct dependency on a specific dependency injection stack, people tend to create a wrapper over the dependency resolver. The same thing was done in SignalR.
public interface IDependencyResolver : IDisposable
{
    object GetService(Type serviceType);
    IEnumerable<object> GetServices(Type serviceType);
    void Register(Type serviceType, Func<object> activator);
    void Register(Type serviceType, IEnumerable<Func<object>> activators);
}
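To make the idea concrete, here is a toy implementation matching the shape of this interface; it is only an illustration of the wrapper pattern, not SignalR’s real DefaultDependencyResolver:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A minimal resolver: the last registration for a type wins for
// GetService, while GetServices returns one instance per registration.
public class SimpleResolver : IDisposable
{
    private readonly Dictionary<Type, List<Func<object>>> _activators =
        new Dictionary<Type, List<Func<object>>>();

    public void Register(Type serviceType, Func<object> activator)
    {
        List<Func<object>> list;
        if (!_activators.TryGetValue(serviceType, out list))
        {
            list = new List<Func<object>>();
            _activators[serviceType] = list;
        }
        list.Add(activator);
    }

    public void Register(Type serviceType, IEnumerable<Func<object>> activators)
    {
        foreach (Func<object> activator in activators)
            Register(serviceType, activator);
    }

    public object GetService(Type serviceType)
    {
        List<Func<object>> list;
        return _activators.TryGetValue(serviceType, out list)
            ? list.Last()() // last registration wins
            : null;
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        List<Func<object>> list;
        return _activators.TryGetValue(serviceType, out list)
            ? list.Select(a => a())
            : Enumerable.Empty<object>();
    }

    public void Dispose()
    {
        _activators.Clear();
    }
}
```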
Additionally, they did something more. In the constructor, they don’t send all the dependencies that are already registered in the IoC container. They send the dependency resolver directly, which will be used by the class itself to resolve all the external dependencies.
public abstract class PersistentConnection
{
    public PersistentConnection(IDependencyResolver resolver, HostContext hostContext)
    {
        Initialize(resolver, hostContext);
    }

    public virtual void Initialize(IDependencyResolver resolver, HostContext context)
    {
        ...
        MessageBus = resolver.Resolve<IMessageBus>();
        JsonSerializer = resolver.Resolve<IJsonSerializer>();
        TraceManager = resolver.Resolve<ITraceManager>();
        Counters = resolver.Resolve<IPerformanceCounterManager>();
        AckHandler = resolver.Resolve<IAckHandler>();
        ProtectedData = resolver.Resolve<IProtectedData>();

        _configurationManager = resolver.Resolve<IConfigurationManager>();
        _transportManager = resolver.Resolve<ITransportManager>();
        _serverMessageHandler = resolver.Resolve<IServerCommandHandler>();
        ...
    }

    ...
}
It is important to note that you don’t need to inject everything through the resolver. You can have specific dependencies injected directly through the constructor. For example, HostContext is something specific for each connection. Because of this it is more natural to send this context using the constructor; it is something variable that changes from one connection to another.
Which is the best approach to this problem?
I cannot say that one is better than the other. Using this solution, the constructor itself will be lighter, but at the same time you add a dependency on the resolver. In a perfect world you shouldn’t have constructors with 7-10 parameters… but when you have cases like this, this solution can be pretty interesting.
Posted in digging, signalR

Monday, 11 November 2013

[Event] Global Day of Coderetreat in Cluj-Napoca! - December 14th, 2013 - Codecamp

Posted on 23:36 by Unknown
We invite you to the Global Day of Coderetreat in Cluj-Napoca!
Global Day of Coderetreat is a world-wide event celebrating passion and software craftsmanship. Last year, over 70 passionate software developers in Cluj-Napoca joined the 2000 developers in 150 cities around the world in spending the day practicing the craft of software development using the coderetreat format. This year, we want to repeat the experience and extend it even more.
We will focus our practice on XP techniques like pair programming, unit testing or TDD, and we'll give special attention to OOD and good code in general.
Find out more about this event and reserve your seat!

Co-organized by: RABS, Code Camp, Cluj.rb, Agile Works and Functional Programmers Cluj-Napoca

Posted in codecamp, eveniment, event

Windows Azure Service Bus - What ports are used

Posted on 03:04 by Unknown
Windows Azure Service Bus is a great mechanism to distribute messages across components of your own system or to different clients. When you want to use this service for enterprise projects you will meet the IT guys.
For them it is very important to control the ports that are used by applications. Because of this, one of the first questions that they will ask you is:
What are the ports that are used by Service Bus?
When this question is asked you should have the answer prepared. Looking over the documentation from MSDN, the following ports are used by Windows Azure Service Bus:
  • 80 – HTTP connection mode
  • 443 – HTTPS connection mode
  • 5671 – Advanced Message Queuing Protocol (AMQP)
  • 5672 – AMQP
  • 9350 – WCF with connection mode Auto or TCP using .NET SDK
  • 9351 – WCF with connection mode Auto or TCP using .NET SDK
  • 9352 – WCF with connection mode Auto or TCP using .NET SDK
  • 9353 – WCF with connection mode Auto or TCP using .NET SDK
  • 9354 – WCF with connection mode Auto or TCP using .NET SDK
I recommend using an HTTP/HTTPS connection where possible. If you are using the .NET SDK you don’t need to make any custom configuration. When you leave the connection mode set to AutoDetect, the SDK will check if a connection can be made using the non-HTTP ports. If the endpoint cannot be reached that way, then it will try to go over HTTP.
If you want to control the connection method, then you will need to set the ‘Mode’ property of SystemConnectivity. The supported modes are:

  • AutoDetect
  • Http
  • Tcp
ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;
Enjoy!
Posted in Azure, Cloud, service bus, Windows Azure

Friday, 8 November 2013

Debugging in production

Posted on 23:42 by Unknown
Abstract
In this article we discover how we can create a dump and which are the basic tools to analyze it. By means of dump files we can access information we could not normally access. Some data can only be accessed through these dumps and not by other means (Visual Studio debugging).
We could state that these tools are very powerful, but they are rather difficult to use, as they require quite a high degree of knowledge.

How many times has it happened to you to have a problem in production or in the testing environment which you are unable to reproduce on the development machine? When this happens, things can go off track, and we try out different ways of remote debugging. Without even knowing it, helpful tools can be right at hand, but we ignore them or simply don't know how to use them.
In this article I will present different ways in which we can debug without having to use Visual Studio.
Why not using Visual Studio?
Though Visual Studio is an extremely good product, which helps us when we need to discover bugs and to debug, it won’t be of great help to us in production. The moment when we have a bug in production, the rules of the game change. In production, the application is compiled for release, and debugging is no longer possible.
When do we need these tools?
The moment we cannot reproduce the problem on our development machines. No matter what we do, we are not able to reproduce the problem we are dealing with. Since we cannot reproduce it, it's like looking for a needle in a haystack.
If by chance the problem appears, but we do not have a reproduction scenario, we find ourselves back in the situation above mentioned.
Another case is when the memory occupied by our application increases in time, the phenomenon emerging only on the production machines. We can only guess what the problem is, but we do not know the exact cause. That is why we may “fix” totally different code areas.
What solutions do we have?
Generally, there are two possibilities at hand. The first one is entirely based on logs. Through logs we are able to identify the application areas which do not function properly. But using logs can be double-edged. You need to know exactly what must appear in the logs and how often. Otherwise, you may end up with thousands of pages of useless and almost impossible to analyze logs. And if we end up with too many logs, the extra logging itself may alter the behavior of the application.
When it is possible, we can send the PDBs on the production machine. This way we will have access to the entire stack trace generated by an exception.
Logs can be of great help for us to solve different problems that emerge in production. But even if logs are very useful, they won’t help us every time. There are different problems which can appear and which are extremely difficult to identify by using logs. For example, a dead-lock would be almost impossible to identify by means of logs.
Another alternative that is available for us is creating memory dumps and analyzing them.
What is a memory dump?
A memory dump is a snapshot of the process at a certain moment. Besides the information regarding memory allocation, a snapshot also contains information on the state of different threads, objects and code. By using this information we can obtain very valuable insight into the running process. This snapshot represents the image of the memory in 32 or 64 bit format, depending on the system.
Generally, there are two types of memory dump. The first one is the minidump. This is the simplest memory dump that can be made, consisting only of information about the stack – the state of the process, the calls that are made and so on.

The second type of memory dump is the full dump. It contains all the information that can be obtained, including a snapshot of the memory. It takes much longer to obtain a full dump compared to a minidump, and the dump file itself is much bigger.
How can we generate a memory dump?
There are different applications which allow us to do this. Some of them allow us to automatically generate a dump, according to different parameters.
In case we need to generate a memory dump, the easiest solution is the Task Manager. All we have to do is right-click a process and select "Create dump file". We can do the same thing using Visual Studio or "adplus.exe". The last alternative is a debug tool for Windows which can be found on almost all machines on which Windows runs.
In the following example, we instruct adplus to create a memory dump at the current moment:
adplus -hang -o C:\myDump -pn MyApp.exe
By means of the -pn option we specify the name of the process for which we wish to create a dump. If we want a dump to be created automatically when the process crashes, we can use the -crash option.
adplus -crash -o C:\myDump -pn MyApp.exe
adplus -crash -o C:\myDump -sc MyApp.exe
If it is necessary for us to automatically create a dump, besides "adplus.exe" we can use DebugDiag and "clrdmp.dll". The three options we have for automatically creating a dump are rather similar. DebugDiag allows us to set up the system so that it automatically generates a memory dump the moment the CPU level stays higher than X% within a certain time span.
Besides these tools there are many others on the market. Depending on your requirements, you can use any tool of this type.
How do we analyze a dump?
The native debugger for dumps is Windbg. This is a powerful tool, by means of which one can obtain very valuable information. The only problem with this tool is that it is not very friendly. We will see a little later what the alternatives to Windbg are. We must remember that in almost all cases, the alternatives to Windbg use this debugger behind the scenes – they just display a friendlier and more useful interface.
An alternative to Windbg is any Visual Studio version more recent than Visual Studio 2010. Beginning with Visual Studio 2010, we have the possibility to analyze dumps for .NET 4.0+ applications. What we can do in Visual Studio is not as advanced as what Windbg allows us to do, but generally it suffices.
Windbg

The first step we need to take after opening Windbg is to load a dump (Ctrl+D). Once loaded, a dump can be examined in different ways. For example, we can analyze the threads, the memory, the allocated resources and so on.
In order to be able to do more, for instance to visualize and analyze the managed code, we need to load additional extensions such as Son of Strike (SOS) or Son of Strike Extension (SOSEX). These two libraries open new doors for us, as they are able to analyze the data from the dump in an extremely useful way.
Son of Strike (SOS)
SOS allows us to look inside the process itself. It gives us access to the objects, the threads and information from the garbage collector. We can even see the names of variables and their values.
One must know that all the information that can be accessed is part of the managed memory. Therefore, SOS is tightly coupled to the CLR and its version. When we load the SOS module, we must make sure we are loading the one that corresponds to the .NET version of our application.
.loadby sos mscorwks
.loadby sos clr
In the examples above, the first line loads the SOS module for .NET 3.5 and earlier, while the second one loads SOS for .NET 4.0+.
All the SOS commands start with "!". The basic command is "!help". If we wish to see the thread list, we can use the "!threads" command, whose output is similar to the following:
0:000> !threads
ThreadCount: 5
UnstartedThread: 0
BackgroundThread: 2
PendingThread: 0
DeadThread: 0
Hosted Runtime: no
Lock
ID OSID ThreadOBJ Count Apt Exception
…
Debug a crash
So far we have seen that there are many tools available to create and analyze a dump. The time has come to see what we have to do in order to analyze a crash.

  • 1. Launch the process
  • 2. Before it "crashes", we instruct adplus to create a dump the moment the process "crashes"
  • adplus -crash -pn [ProcessName]
  • 3. Launch Windbg (after the crash)
  • 3.1 Load the dump
  • 3.2 Load SOS
  • 3.3 !threads (to see which thread has crashed)
  • 3.4 !PrintException (on the thread that has crashed, in order to see the exception)
  • 3.5 !clrstack (to see the call stack)
  • 3.6 !clrstack -a (to see the stack together with the parameters)
  • 3.7 !DumpHeap -type Exception (it lists all the exception objects on the heap, including ones that have not yet been collected by the GC).

One must know that the results depend on the way the application was compiled – for instance, whether code optimization was performed during compilation. Moreover, the exception list we get may be quite long, because commands such as !DumpHeap return all the exceptions encountered – even pre-created ones, such as ThreadAbortException.
How do we identify a deadlock?
A deadlock emerges when two or more threads are each waiting for a resource held by the other. In these cases, a part of the application, if not the entire application, gets blocked.
In this case, the first step is to create a dump using the command:
adplus -hang -o C:\myDump -pn [ProcessName]
Then, it is necessary for us to analyze the stack trace of each thread and see whether it is blocked (Monitor.Enter, ReadWriteLock.Enter…). Once we have identified these threads, we can find the resources used by each thread, together with the thread that keeps these resources blocked.
For these final steps, the "!syncblk" command comes to our aid. It lists the synchronization blocks together with the threads that own them.

This article was written by Radu Vunvulea for Today Software Magazine.
Posted in eveniment, event | No comments

Simple load balancer for SQL Server Database

Posted on 04:49 by Unknown
Two weeks ago I started to work on a PoC where the bottleneck was the database itself. We had a lot of complicated and expensive queries over the database. Because of this, if we wanted good performance we had to find a solution to have more than one instance of SQL Server.
Because the solution was on Windows Azure, we wanted to test different configurations and solutions. We wanted to see what the performance is if we use 1, 2 and 3 SQL endpoints. Not only this, we had different types of SQL endpoints – Azure SQL Database (SaaS), a Virtual Machine with SQL Server, and Azure SQL Premium (SaaS, but with dedicated resources).
The good part in our scenario was that the data doesn't change very often. We could start from the assumption that the database will be updated only once per day. Because of this we could use different methods to create load balancing over our SQL instances.
Because the time was very limited and the only thing that we needed was to distribute the load across all the machines, we decided to replicate the data on all the SQL instances and create a simple load balancer in code. We divided the current tick count (timestamp) by the number of instances and, based on the remainder, selected the SQL instance that will be hit. A simple mechanism that can be very easily managed for a PoC. Of course the final solution will be more complicated, but for a PoC this was perfect. In 10 minutes we had a load balancer :-).
We checked the load of each machine and we can say that the queries were distributed quite evenly. The load of each machine was almost the same (+/-10%)
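The tick-based selection described above can be sketched in a few lines of C# (the instance list and connection strings are illustrative placeholders, not the ones from the PoC):

```csharp
using System;
using System.Collections.Generic;

public static class SqlInstanceSelector
{
    // Connection strings of the replicated SQL instances (illustrative values).
    static readonly List<string> Instances = new List<string>
    {
        "Server=sql-node-1;Database=PoC;...",
        "Server=sql-node-2;Database=PoC;...",
        "Server=sql-node-3;Database=PoC;..."
    };

    // Pick an instance based on the current tick count: the timestamp modulo
    // the number of instances spreads the queries across the nodes.
    public static string Next()
    {
        long index = DateTime.UtcNow.Ticks % Instances.Count;
        return Instances[(int)index];
    }
}
```

Every query then asks the selector for a connection string, which is how a handful of lines can act as a load balancer for a PoC.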

What other solutions could we use?
Master-Slave – In this case we have multiple instances of SQL Server. Each slave can be used for read operations and the master is used for write operations.
Sharding – The solution that we used to resolve our problem
AlwaysOn – The base concept is the same as for Master-Slave
Transactional Replication – Is based on a snapshot; when data changes, all the subscribers are notified about the change. You can filter individual database objects and publish changes to subscribers
Peer-to-Peer Replication – Is based on Transactional Replication, but with some new features
Change Tracking – Is a very basic and simple tracking mechanism. It only notifies you that a specific row has changed; you don't get any information about the old or new value
Change Data Capture – Writes to the logs all the changes that are made to a table; these logs can be used to track and synchronize the rest of the database instances

Conclusion
There are a lot of solutions on the market for SQL Server. Based on our needs we had to select the best one for us. I was surprised to see that a simple load balancer, written in 5 lines of code, can do a pretty good job.

Posted in Azure, Cloud, sql, Windows Azure | No comments

Thursday, 7 November 2013

[PostEvent] MSSummit 2013, Bucharest

Posted on 14:14 by Unknown
WOW! Why? Because over the last two days (6-7 November 2013) I had the opportunity to participate in MSSummit 2013, held in Bucharest. It was one of the biggest IT events organized in Romania this year.
There were more than 1000 people, 5 simultaneous tracks and more than 45 speakers. It's been a while since I've seen such an event in Romania. I was impressed by the event itself, the location and the number of people.
The session list was extremely interesting; there were sessions held by David Chappell, Chris Capossela, Michael Palermo and so on. All the sessions were great, and we had the opportunity to learn and discover new stuff. After this event we could say that Microsoft is full of surprising things (in a good way).
At this event I was invited as a speaker and I held two sessions. In the first one I talked about how we can write and run load tests using Visual Studio 2013 and Windows Azure. In the second session I talked about fluent communication between client – server – client using SignalR.
Special thanks to the attendees that stayed until 18:30 on the last day of the event.
I hope that next year Microsoft Romania will organize another MSSummit. This is a great conference that can grow and become very big and powerful.
In the following picture I'm with my colleagues from iQuest Group.




Posted in eveniment, event | No comments

Monday, 4 November 2013

How to read response time when you run a performance test

Posted on 18:58 by Unknown
Measuring the performance of an application is mandatory before releasing it. Measuring the performance of a PoC is also mandatory, to validate the base concepts and ideas.
The performance of a system can be measured in different ways, from processing time, to number of users, processor load, memory consumption and so on. Before starting a performance test you should know exactly what you want to measure.
When you want to measure the response time of a specific service/endpoint you should know how to interpret the results. Statistical information can be viewed in different ways. Each of these views can give you a different perspective on the results.

Average
The average of the response time is calculated. This information is important if you want to know the average request time. But even though it can be very useful, it can also be misleading. For example, out of 1000 requests you can have an average response time of almost 16 seconds even if the chance of a request taking under 10 seconds is around 98% (20 requests that take 300 seconds and 980 requests that take only 10 seconds average out to 15.8 seconds).
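A quick illustration, with hypothetical numbers, of how a few slow requests skew the average even when almost all requests are fast:

```csharp
using System;
using System.Linq;

class AverageVsDistribution
{
    static void Main()
    {
        // 980 requests at 10 seconds and 20 requests at 300 seconds.
        double[] times = Enumerable.Repeat(10.0, 980)
            .Concat(Enumerable.Repeat(300.0, 20))
            .ToArray();

        double average = times.Average();                                   // 15.8
        double under10 = times.Count(t => t <= 10) * 100.0 / times.Length;  // 98

        Console.WriteLine("Average: {0}s", average);  // Average: 15.8s
        Console.WriteLine("<= 10s: {0}%", under10);   // <= 10s: 98%
    }
}
```

The average alone suggests every request is slow, while 98% of them finish in 10 seconds – which is why the distribution view below matters.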

Distribution
This view gives you the possibility to see the distribution of requests based on response time. For example, you will be able to know that out of 1000 requests, 300 requests took 1 second, 500 took 2 seconds, 100 took 9 seconds and so on.
When you are measuring the scalability of a system, the distribution of response times is more important than the average response time. This information will help you understand how response times change under different configurations. You can have cases where the average time is the same, but the distribution of the response times is very different.
Here we could also talk about the mean and standard deviation of the response time.

Min/Max
When the response time needs to stay within a specific interval (usually a maximum of X), the min/max values of the requests give you this data.

These days I had to measure the performance of a database and how scalable it is. When we measured the average execution time of each query with 1, 2 and 3 database nodes, we observed that the average response time doesn't improve that much. In contrast, the distribution of query response times changes a lot. From 30% of requests taking less than 3 seconds with one node, we ended up with more than 50% of requests taking under 2 seconds with 3 database nodes.

Posted in Azure | No comments

Sunday, 3 November 2013

VM and load balancer, direct server return, availability set and virtual network on Windows Azure

Posted on 07:44 by Unknown
In today's post we will talk about how we can configure a load balancer, direct server return, availability sets and a virtual network when working with virtual machines on Windows Azure.
For virtual machines we cannot talk about the load balancer without talking about endpoints. Endpoints give you the possibility to specify the port (endpoint) that can be publicly accessed. For each endpoint you need to specify the public port, private port and protocol (TCP, UDP). The private and public ports can be very useful when you want to redirect specific calls to different ports. If you don't specify the endpoints for a virtual machine, the machine cannot be accessed from outside (for example when you host a web page and you need to have port 80 open).
For web and worker roles this load balancer is supported out of the box. We could say that it is the same thing for virtual machines as well, but you need to make some custom configuration. You will need to specify that you want the load balancer functionality when you create an endpoint. Once you create an endpoint that is load balanced, the only thing you need to do on the second, 3rd… machine to be included in the load balancer is to attach its endpoint to the same load-balanced set ("Add an endpoint to an existing load-balanced set").
Direct server return is another feature of endpoints. It can be very useful when you want each machine to respond to the client directly, without the response being routed back through the load balancer. In this way, once a client starts to communicate with a machine, the connection will stick to only that virtual machine (sticky connections) – useful when you have a DB on the machines or you have an in-memory session that is not shared between machines.
A virtual network gives you the possibility to create a private network formed from virtual machines in Windows Azure. When you need to share content between machines, this is the best approach. Even if you have machines under the same subscription, without a virtual network you will not be able to access them using their private IP or name. You have almost the same access level from one machine to another as you would have between machines under different subscriptions.
Because of this, when you need to communicate between machines that are behind the same DNS name or public IP, this is the best approach. For example, you would not otherwise be able to reach a specific instance that is behind a load balancer in order to update content or call specific endpoints.
A virtual network can be created very easily from the Network tab. You have a lot of options, from custom DNS names for specific machines to custom address spaces. Once a virtual network is created you can very easily add new virtual machines to it.
Remarks: You should take into account that you cannot add an existing virtual machine to a new virtual network. A workaround for this problem is to stop and delete the machine without deleting the disk itself, then recreate a new machine based on that disk.
Not least, let's talk a little about availability sets. This is a pretty interesting setting. When you create two or more machines for the same purpose, you don't want the resources of those machines (memory, processor and storage) to be in the same rack. Why? If the rack stops working, you will have both machines down and the load balancer will not be able to help you. When you create an availability set you have the guarantee that the machines under the same availability set will not be in the same rack. This gives you the availability of 99.95% that is offered by the SLA.
The feature that I enjoy the most is the load balancer in combination with endpoints. It is a great feature that can help you a lot.
Posted in Azure, Cloud, Windows Azure | No comments

Friday, 1 November 2013

Bugs that cover each other

Posted on 00:48 by Unknown
This week I had the opportunity to work on a PoC that was pretty challenging. In a very short period of time we had to test our ideas and come with some results. When you need to do something like this, you have to:

  • Design
  • Implement
  • Test
  • Measure (Performance Test)

The first two steps were pretty straightforward; we didn't have any kind of problems. The testing of the current solution went pretty well; we found some small issues – small problems that were resolved easily.
We started the performance test, when we were hit by a strange behavior. The database server had 3-4 minutes at 100% load, after which the load would go down to 0% for 5-6 minutes. This cycle would repeat indefinitely.
The load of the database should have been 100% all the time… We looked over the backend servers and everything looked okay. They received requests from the client bots and processed them. Based on the load of the backend, everything should have been fine.
The next step was to look over the client bot machines. Based on the tracking information everything should have been fine… But still we had a strange behavior in the database; something was not right.
We started to take each part of the solution and debug it. We started with the SQL stored procedures, then the backend and finally the client bots. When we looked over the client bots we observed a strange behavior there. For each request we received at least two different responses.
After an hour of debugging we found out that we had 2 different bugs. The interesting part was that one of the bugs created a behavior that masked the other bug. Because of this, on the backend we had the impression that we had the expected behavior and that the clients worked well.
The second bug, the one that was masked by the first, was big and pretty ugly.
In conclusion, I would say that even when you write a PoC and you don't have enough time, try to test with one, two and three clients in parallel. We tested with 1, 10 and 100 clients. Because of the log flood for 10 and 100 clients, we were not able to observe the strange behavior before starting the performance testing.

Posted in | No comments

Wednesday, 30 October 2013

Events, Delegates, Actions, Tasks with Mutex

Posted on 13:25 by Unknown
Mutex. If you are a developer then you have heard about mutexes. A mutex is used to ensure that only one thread/process has exclusive access to a specific section of code.
Let's look over the following code and see whether it is correct or not.
public class Foo
{
Mutex mutex = new Mutex();
event EventHandler someActionEvent;

public void Do()
{
mutex.WaitOne();
...
someActionEvent += OnSomeAction;
}

public void OnSomeAction(object sender, EventArgs e)
{
mutex.ReleaseMutex();
}
}
The "WaitOne" method blocks the current thread until it receives a release signal (the wait handle being signaled). "ReleaseMutex" is used when we want to release the lock.
The code above releases the mutex in an event handler that may be executed on a different thread. The problem is that we don't call ReleaseMutex from the same thread that called "WaitOne". Because of this, the behavior we get is not the one we expect: a Mutex is thread-affine, and releasing it from a thread that doesn't own it throws an exception.
The same problem will appear if we use a mutex in combination with delegates, lambda expressions or Action/Func.
public class Foo
{
Mutex mutex = new Mutex();

public void Do()
{
mutex.WaitOne();
...
MyMethod(()=>
{
mutex.ReleaseMutex();
});
}
}
This is not all. We will also run into this problem when using Task. This happens because each task can be executed on a different thread.
public class Foo
{
Mutex mutex = new Mutex();

public void Do()
{
mutex.WaitOne();
...
Task.Factory.StartNew(()=>
{
...
mutex.ReleaseMutex();
});

}
}
There are different solutions to this problem, from named mutexes to other synchronization primitives. What we should remember about an unnamed mutex is that we need to call ReleaseMutex from the same thread that acquired the lock.
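One possible alternative, sketched below, is SemaphoreSlim: unlike Mutex it has no thread affinity, so it can safely be released from a task, callback or event handler running on another thread.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class Foo
{
    // Initial count 1, max count 1: behaves like a lock, but without thread affinity.
    readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

    public void Do()
    {
        semaphore.Wait();
        Task.Factory.StartNew(() =>
        {
            // ... work executed on another thread ...
            semaphore.Release(); // legal here, unlike Mutex.ReleaseMutex()
        });
    }
}
```

Note that SemaphoreSlim cannot be shared between processes; if you need cross-process synchronization, a named mutex (released on the acquiring thread) is still the tool.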
Posted in | No comments

Tuesday, 29 October 2013

Digging through SignalR - Commands

Posted on 00:41 by Unknown
Looking over the source code of SignalR, I found some interesting classes and ways of implementing different behaviors. In the next series of posts I will share with you what I found interesting.
Before starting, you should know that SignalR is an open source project that can be accessed on GitHub.
In today's post we will talk about the command pattern. This pattern offers the ability to define a "macro"/command that can be executed without knowing the caller. Commands can be handled in different ways, from putting them in a queue to combining them or offering support for redo/undo.
In the SignalR library I found an implementation of the command pattern that caught my attention.
internal interface ICommand
{
string DisplayName { get; }
string Help { get; }
string[] Names { get; }
void Execute(string[] args);
}
internal abstract class Command : ICommand
{
public Command(Action<string> info, Action<string> success, Action<string> warning, Action<string> error)
{
Info = info;
Success = success;
Warning = warning;
Error = error;
}

public abstract string DisplayName { get; }

public abstract string Help { get; }

public abstract string[] Names { get; }

public abstract void Execute(string[] args);

protected Action<string> Info { get; private set; }

protected Action<string> Success { get; private set; }

protected Action<string> Warning { get; private set; }

[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode", Justification = "May be used in future derivations.")]
protected Action<string> Error { get; private set; }
}
Why? Because of the way the handlers for actions like success, warning, info and error are passed in. When creating the command, you need to specify them through the constructor. In this way the developer is forced to provide them. I think this is a great and simple way to do it. If a developer doesn't want to handle these actions, they can pass a null value. This solution is better than having one or more events.
Maybe it would be interesting to wrap these 4 parameters in a simple class. In this way you would have all the related actions under the same object. Besides this, we would reduce the number of parameters of the Command class by 3.
internal class CommandCallbackActions
{
public CommandCallbackActions(Action<string> info, Action<string> success, Action<string> warning, Action<string> error)
{
Info = info;
Success = success;
Warning = warning;
Error = error;
}

public Action<string> Info { get; private set; }

public Action<string> Success { get; private set; }

public Action<string> Warning { get; private set; }

[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode", Justification = "May be used in future derivations.")]
public Action<string> Error { get; private set; }
}

internal abstract class Command : ICommand
{
public Command(CommandCallbackActions callbackActions)
{
CallbackActions = callbackActions;
}

public abstract string DisplayName { get; }

public abstract string Help { get; }

public abstract string[] Names { get; }

public abstract void Execute(string[] args);

public CommandCallbackActions CallbackActions { get; set; }
}
Another method that drew my attention was "Execute". The command arguments are sent through an array of strings. This is a very simple and robust way to send parameters. If this is enough for your application, then you should not change it to something more complicated. Otherwise, you can replace the array of arguments with an interface ("ICommandArgs"). Each custom command can have its own implementation of this interface. You should use this only if you really need it; otherwise you will only make the project more complicated.
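As a sketch of that alternative (the names ICommandArgs and ConnectCommandArgs are hypothetical, not part of SignalR):

```csharp
internal interface ICommandArgs { }

// A hypothetical strongly typed argument set for one specific command.
internal class ConnectCommandArgs : ICommandArgs
{
    public string Url { get; set; }
    public int TimeoutInSeconds { get; set; }
}

internal interface ICommand
{
    string DisplayName { get; }
    string Help { get; }
    string[] Names { get; }
    // string[] args replaced with a typed argument object.
    void Execute(ICommandArgs args);
}
```

Each Execute implementation would cast (or pattern match) to the argument type it expects, which buys type safety at the cost of extra classes.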

Posted in digging, signalR | No comments

Wednesday, 23 October 2013

Story - Developers that don’t understand the business of the application

Posted on 00:21 by Unknown
These days I attended a presentation of a product. The idea of the product was great; I think they will have a lot of success in the future.
Being a technical presentation, it was held by one of the developers on the team. From this point of view it was great, because the audience consisted of developers and other technical people. He could respond to a lot of questions related to architecture, scalability, the protocols that were used, base concepts and so on.

Until….

Someone asked about something related to the business idea and what kind of problem this application will solve….

In that moment our developer stopped……..

He could not give an answer….

And… he said…
I'm a technical person; it is not my business to know the business of the application!

In that moment I froze. How the hell can you develop an application without understanding the business? This is one of the biggest mistakes a team member can make – refusing to understand the business of the application. Before thinking about a solution, it is a must to understand the business of the application itself.
How can you resolve a problem without understanding the problem itself?
Being a monkey with a keyboard is great for a monkey. But when you need to solve a problem and give a solution, being a monkey with a keyboard is not enough… This is how you end up with an airplane when the customer wanted a car.
Posted in | No comments

Monday, 21 October 2013

Custom Content Delivery Network Mechanism using Cloud (Azure)

Posted on 08:59 by Unknown
Requirements:
Let's imagine an application that needs to deliver Excel reports in different locations around the world. Based on the client's origin, the application should be able to deliver a specific Excel version. The application should be deployed in different locations around the world.
The current CDN offering cannot be used because the master node would need to push the content to the slaves (CDN nodes), and the total report size for each slave will be over 20GB.
Non-Cloud solution:
If we went with a non-cloud solution, we would have to develop an application that is deployed in different locations all around the world. Each deployment should be able to detect the source of the request and provide the specific Excel report. We would also have to develop a redirection mechanism / load balancing solution that is able to redirect the user to a specific node.
Cloud solution:
If we go with a cloud solution based on Windows Azure, we can imagine a master-slave solution.
Each slave of our application will have the Excel reports for the countries in its region. These slaves will be able to provide reports for the countries in their own region.
Having an application deployed in different data centers gives us the possibility to use Traffic Manager. Traffic Manager is an out of the box mechanism that redirects a call to the closest data center where our application is deployed.
We can have our slaves deployed in different data centers around the globe. Each slave will have the reports for the countries that it serves. When a request comes from a country that is not served by the specific slave, the request will be redirected to the global slave, which has the reports for all the countries.
These slaves could detect when too many requests are coming in for a country that is not mapped to their location, and trigger an alert or the provisioning action for the reports of that country.
Each slave will have an endpoint that will be used to resolve an Excel report request and a storage account (blobs) that will be used to store the reports themselves. Based on the client attributes, this service will return a URL with a Shared Access Signature (SAS) for the blob where the report is stored. Using SAS, the access to the content can be controlled.
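A sketch of generating such a SAS URL with the Windows Azure Storage client library (the container name, blob path and one-hour expiry are illustrative choices, not requirements of the design):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ReportUrlResolver
{
    // Returns a time-limited, read-only URL for a report blob.
    public static string GetReportUrl(string connectionString, string reportPath)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("reports");
        CloudBlockBlob blob = container.GetBlockBlobReference(reportPath);

        string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        });

        return blob.Uri + sas; // URL usable directly by the client until expiry
    }
}
```

The signature is computed locally from the account key, so the slave's endpoint can hand out these URLs without touching the blob itself.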
The solution will contain a master that manages all the Excel reports on all the nodes. The master will be able to deploy new versions of reports to each slave node, delete old ones, and so on. Besides this, the master is the one that can trigger the provisioning of a slave with additional countries. The master will contain a storage account (blob) with all the reports that exist and are valid, and a service that manages and maintains all the slave nodes.
When provisioning is triggered, the download to a specific slave should not be done by the master node. Because we can have a lot of slaves, this action would consume a lot of resources and could cause a lot of problems. Instead, the master node should send a notification to the specific slave that a given report is available for download/update/delete; the slave node receives the notification and triggers the corresponding action. In this way we move all the load from the master to the slaves.
The notification mechanism can be built over Service Bus. Each slave node is represented by a different subscription. When the download/update/delete action finishes, the slave node can send a confirmation back to the master node using a queue or a Service Bus Topic.
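A minimal sketch of this notification flow, using the Service Bus brokered messaging API; the topic name, subscription name, and message properties are hypothetical:

```csharp
using Microsoft.ServiceBus.Messaging;

// Master side - notify slaves that a new report version is available.
// Topic and property names are assumptions for illustration.
TopicClient topicClient = TopicClient.Create("reportNotifications");
BrokeredMessage notification = new BrokeredMessage();
notification.Properties["Action"] = "Download";
notification.Properties["Country"] = "Romania";
topicClient.Send(notification);

// Slave side - each slave listens on its own subscription.
SubscriptionClient subscriptionClient =
    SubscriptionClient.Create("reportNotifications", "slave-west-europe");
subscriptionClient.OnMessage(message =>
{
    // Trigger the download/update/delete action locally,
    // then confirm back to the master over a queue or topic.
    message.Complete();
}, new OnMessageOptions());
```

Because each slave has its own subscription, the same notification reaches every slave that registered for it without the master having to know how many slaves exist.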

Things that I like about this solution:

  • Traffic Manager – automatically redirects requests, and when it detects that a slave node is down it redirects to the next slave
  • SAS – the content of the blob can be shared in a secure manner
  • Slave's endpoint – if a slave is hit by a lot of clients, we can scale up the number of instances of that slave without affecting the rest of the slaves
  • Redundancy – when a slave is down, all its requests are redirected to the closest slave
  • Report resolver – when a report cannot be resolved by a specific slave, the request is resolved by the global slave, which can not only log the issue but also notify the master node about the incident; the master node can then trigger a custom action such as provisioning
  • Scalability – each slave can scale independently based on its load
  • Provisioning mechanism – provisioning uses the slave's own processor resources, so the master node does not hit peaks
  • Service Bus – notifications from master to slaves are made using Service Bus Topics, so one or more slaves can register for the same countries
  • Download – the download itself is made directly from Azure Storage, so the load on the slaves is minimal

We could take a similar approach and eliminate the endpoints from the slaves, so each slave would contain only the storage part. This is a good solution when the number of requests that need to be handled is not very high, but when you have hundreds of requests per minute, the solution with slave endpoints is more suitable. The slave endpoints can be hosted on small instances.



Posted in Azure, Cloud, Windows Azure | No comments

CMS application for 3 different mobile platforms – Hybrid Mobile Apps + Mobile Services

Posted on 05:18 by Unknown
These days I attended a session where the idea of a great mobile application was presented. From a technical perspective the application is very simple, but the content that is delivered to the consumer has great value.

The proposal was three different applications, one for each platform (iOS, Android and Windows Phone). When I asked why it is so important to have three different native applications for an app whose main purpose is to bring content to the users, I found out that they need push notification support; all the rest of the content will be loaded from the server (CMS).
Having three different native applications means developing the same application three times for three different platforms... 3x maintenance, 3x more bugs, and so on...
At this moment I had a feeling of déjà vu; I have the feeling that I already wrote about this.
For this kind of application you develop one great HTML5 application. For push notification support it is very easy to create three thin native applications that host a WebBrowser control, in which you display the HTML content of your application. Only custom behavior, like push notifications, needs to be developed for each specific platform.
In this way, the applications will be very simple and the cost of development and maintenance will be very low.
For push notifications I recommend using Mobile Services for Windows Azure. It is a great service that can push a notification to all devices and platforms with minimal development costs.

Posted in Azure, Cloud, Windows Azure | No comments

Tuesday, 15 October 2013

How to track who is accessing your blob content

Posted on 01:24 by Unknown
In this post we’ll talk about how we can monitor our blobs from Windows Azure.
When hosting content on a storage account, one of the most common requests coming from clients is:

  • How can I monitor each request that is coming to my storage?

For them it is very important to know:

  • Who downloaded the content?
  • When was the content downloaded?
  • How did the request end (was it successful)?

I saw different solutions for this problem. Usually the solutions involve services that are called by the client after the download ends. There is nothing wrong with having a confirmation service, but you add another service that you need to maintain. It will also be pretty hard to identify why a specific device cannot download your content.
Windows Azure now has a built-in feature that gives us the possibility to track all the requests made to the storage account. In this way you will be able to provide all the information related to the download process.
Monitor
In the Windows Azure portal you will need to go to your storage account and navigate to the Monitoring section. For blobs you will need to set the monitoring level to Verbose. With the monitoring level set to Verbose, all metrics related to your storage will be persisted; the main difference between Minimal and Verbose is the level of detail that is collected.
This data can be persisted from 1 day to 1 year. Based on your needs and how often you collect the data, you can set the value that suits you best. If your storage is used very often, I recommend setting at most 7 days. You can define a simple process that extracts the monitoring information from the last 7 days, stores it in a different location, and analyzes it using your own rules. For example, you may want to raise an alert to your admins if requests coming from the same source failed more than 10 times.
All the information related to this is persisted in an Azure table in your storage account named ‘$MetricsCapacityBlob’. This table contains all the monitoring information for your storage. At this moment we cannot write the monitoring data for a specific blob to a specific table, but we can query this table and select only the information that we need.
Logging
The feature that we really need is logging. Using the logging functionality we will be able to trace the full request history. This feature can be activated from the portal, under the logging section. You can activate logging for the 3 main operations that can be made over a blob: Read/Write/Delete.
All this data is stored under the $logs container:
https://<accountname>.blob.core.windows.net/$logs/blob/YYYY/MM/DD/hhmm/counter.log
Everything that we can imagine can be found in these logs:

  • Successful and failed requests with or without Shared Access Signature
  • Server errors
  • Timeouts errors
  • Authorization, network, throttling errors
  • ...

Each log entry contains helpful information like:

  • LogType (write, read, delete)
  • StartTime 
  • EndTime
  • LogVersion (reserved for the future; at this moment there is only one version – 1.0)
  • Request URL
  • Client IP
  • …

The most useful information is usually found under ‘Client IP’ and ‘Request URL’. Maybe you ask yourself why we have both a start time and an end time. This can be very useful for a read request, for example: in this way we can know how long the download process took.
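Reading these logs from C# is a matter of walking the blobs in the $logs container; each log line is a semicolon-delimited record. The sketch below assumes the field positions of the Storage Analytics log format version 1.0 (verify the indices against the documentation before relying on them):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
CloudBlobContainer logs = account.CreateCloudBlobClient()
    .GetContainerReference("$logs");

// Walk all log blobs (flat listing) and print who accessed what.
foreach (IListBlobItem item in logs.ListBlobs(null, true))
{
    CloudBlockBlob logBlob = (CloudBlockBlob)item;
    foreach (string line in logBlob.DownloadText()
        .Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries))
    {
        string[] fields = line.Split(';');
        // Assumed indices per the v1.0 format: 1 = start time,
        // 2 = operation type, 11 = request URL, 15 = client IP.
        Console.WriteLine("{0} {1} {2} {3}",
            fields[1], fields[2], fields[15], fields[11]);
    }
}
```

In a real system you would filter the listing by date prefix (YYYY/MM/DD) instead of downloading every log blob.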

I invite you to explore this feature when you need to track the clients that access your blob resources.
Posted in Azure, Cloud, Windows Azure | No comments

Friday, 11 October 2013

Certificates and resource management

Posted on 20:57 by Unknown
Last month I had two posts where I talked a little about certificates in .NET (1, 2).
In this post I want to discuss resource management when X509Certificate2 is used. This is applicable to X509Certificate as well.
When we use a .NET object we usually check if the IDisposable interface is implemented. If yes, then we call the ‘Dispose’ method. Unfortunately, X509Certificate doesn’t implement this interface.
Behind this class there is a handle (SafeCertContextHandle) to unmanaged resources. Because of this, each new instance of the certificate holds a handle to unmanaged resources. If you need to process 2,000 certificates from the store, it will be very easy to get a wonderful “OutOfMemoryException”.
To release all the resources that are used by a reference to a certificate you will need to call the ‘Reset’ method. This method releases all the resources associated with that certificate.
X509Certificate2 cert = …
…
cert.Reset();
X509Store has the same story; for it you will need to call the ‘Close’ method.
X509Store store = …
…
store.Close();
The ‘Close’ method is pretty easy to spot when you look over the available methods. The name gives you a hint that there are resources that need to be released, but the name ‘Reset’ doesn’t help you much with this. Because of this, it is very easy to end up with an “OutOfMemoryException”.
I don’t understand why X509Certificate2 doesn’t implement the IDisposable interface. The resource-release idiom should apply in this case as well.
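A defensive pattern is to treat ‘Reset’ and ‘Close’ as if they were ‘Dispose’ and put them in try/finally blocks; a minimal sketch:

```csharp
using System.Security.Cryptography.X509Certificates;

X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
try
{
    store.Open(OpenFlags.ReadOnly);
    foreach (X509Certificate2 certificate in store.Certificates)
    {
        try
        {
            // ... use the certificate ...
        }
        finally
        {
            // Release the unmanaged handle behind this certificate.
            certificate.Reset();
        }
    }
}
finally
{
    // Release the handle behind the store itself.
    store.Close();
}
```

With this shape, even an exception thrown while processing one certificate cannot leak the handles of the certificates already visited.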

Posted in Certificate | No comments

Feature of a Smart Devices that I miss

Posted on 04:04 by Unknown
Nowadays, everyone has a smart device: a phone, a tablet, a watch or a camera. These devices give us the power to stay connected with the world in real time and to access our information extremely fast.
All these devices have different security mechanisms that allow us to track them, make them ring and erase all the data.
But once a device is stolen and all the data is deleted, the “new” owner can register and use the device without any kind of problem. From that point, it is pretty complicated to track our device and recover it.
A feature that I miss on these new smart devices is the ability to track and block them after someone resets them. Based on the unique ID of each device, we should be able to track them even after a factory reset.
I would like to see a feature for devices that were already registered online: a confirmation request sent to the old owner when the device is registered with another account.
Imagine that you buy a device and someone steals it. From that moment he would not be able to use it; the smart device would become only a brick. I don’t have anything against restricting use of the device (internet and online access) from the moment the device is marked as stolen. Even after a factory reset I would expect this behavior to remain.
In this way, people would not be able to use devices that were purchased on the “black market”.
Posted in | No comments

Thursday, 3 October 2013

How to assign a private key to a certificate (from command line or code)

Posted on 02:39 by Unknown
In the last post we saw how we can extract a certificate from a PKCS7 store. In this post we will see two different ways to register a certificate into the local store using C#.
The first method that can be used to register a certificate into the store uses the X509Store class.
X509Certificate2 cert = new X509Certificate2(...);
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadWrite);
store.Add(cert);
store.Close();
If you want to import the private key together with the certificate, you need to pass the certificate constructor the flag that tells the system "Please, persist the private key".
X509Certificate2 cert = new X509Certificate2 (
[path],
[password],
X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadWrite);
store.Add(cert);
store.Close();
This solution works in 99.9% of cases, but I was "lucky" enough to discover that some certificates don’t accept the import of the private key from C# code. I didn’t understand why, but it seems that there are cases when the private key is not persisted. In these situations, the only solution that I found was to take a step backward and use the command line. You will need two different commands.
The first command installs the certificate into the local store:
certutil -user -privatekey -addstore my [certificatePath]
The second command will "repair" the certificate; basically, the private key is attached to the certificate.
certutil -user -repairstore my "[certThumbprint]"
To be able to use the second command you need to specify the thumbprint of the certificate. This can be obtained very easily from the store (X509Store). Each of these commands can be run from C# using "Process.Start(fileName, arguments);".
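Launched from C#, the two commands could look like the sketch below; the certificate path and thumbprint are placeholders you have to fill in:

```csharp
using System.Diagnostics;

// Placeholders - replace with your certificate file and its thumbprint.
string certificatePath = @"C:\temp\certificate.pfx";
string certThumbprint = "<thumbprint from X509Certificate2.Thumbprint>";

// 1. Install the certificate into the current user's 'My' store.
Process.Start("certutil",
    string.Format("-user -privatekey -addstore my \"{0}\"", certificatePath))
    .WaitForExit();

// 2. 'Repair' the certificate - attach the private key to it.
Process.Start("certutil",
    string.Format("-user -repairstore my \"{0}\"", certThumbprint))
    .WaitForExit();
```

Note that the second command has to run after the first one finishes, hence the WaitForExit calls.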
The most important command, when you need to assign a private key to a certificate that doesn’t have one, is the second.
Good luck!
Posted in | No comments

Wednesday, 2 October 2013

How to get a certificate from a PKCS7 file

Posted on 02:14 by Unknown
Lately I was forced to play a lot with certificates from .NET code (in my case, C#). The good part when you are using .NET is that you have an API that can be used to manage system certificates.
It is pretty simple to load a certificate from the certificate store, from a file, or from an array of bytes.
X509Certificate2 cert = new X509Certificate2(bytes);
In the above example I am loading a certificate from an array of bytes. The code is pretty simple and works great.
There are situations when you receive a certificate from an external system. In these cases, the first step is to save the certificate locally. When you are a 3rd party receiving a certificate from a server, you should be aware of the format of the certificate.
In my case I received a certificate as a signed PKCS7 byte array without knowing this, and I assumed that it was enough to load the certificate using the constructor. The funny thing is that you will be able to load a signed PKCS7 file using the certificate constructor without any kind of problem, but you will not load the certificate that you expect.
This happens because a signed PKCS7 file can contain more than one certificate, and the X509Certificate2 constructor will load the certificate that was used to sign the store rather than the certificates that can be found in the raw data of the PKCS7 file.
To access and load a certificate from the raw data of a PKCS7 file you need to use SignedCms. This class gives you the possibility to access a message in PKCS7 format and is the simplest way to access all the certificates from a signed file.
SignedCms cms = new SignedCms();
cms.Decode(bytes);
X509Certificate2Collection certs = cms.Certificates;
The Certificates property is an X509Certificate2Collection that contains all the certificates from the signed file.

Posted in | No comments