Windows Mobile Support


Sunday, 30 June 2013

TechEd 2013 - Day 4

Posted on 22:38 by Unknown
The last day of TechEd 2013 has ended – goodbye, Spain. The day was full of interesting sessions related to Windows 8, Azure and big data. I discovered new ways to use BitLocker for enterprise customers, and why different third-party vendors prefer BitLocker and give up their own encryption mechanisms.
You have probably heard that the preview version of Windows 8.1 can be downloaded from Microsoft. Be aware: if you install the preview, moving to the final version of 8.1 will not be so easy. When you install the preview, don't let the installer update the backup partition too. That way it will be easy to go back to 8.0 and install the final version of Windows 8.1 from there. In general, I prefer to install preview versions of any software on virtual machines.
While attending a session about big data and Hadoop, I found out how to increase the performance of a reduce operation: it is recommended to feed it as few files as possible. The size of the files is not important – having only a small number of files at this step makes the reduce operation faster.
“ASVMITI” is the storage mechanism offered by Microsoft. One of its benefits is persistence: the SLA offered to clients guarantees the persistence of, and access to, all our information, even if we stop or delete our HD. This is great, because after the processing ends we can still access all the data. Don't forget that storing data in the cloud is very cheap. As input we can use any kind of data, even archives of type gzip, bz2 or deflate.
The last 5 posts were only a small summary of the things presented at TechEd Europe. A lot of interesting news and information was shared there, and all the sessions can be viewed on Channel 9.
Read More
Posted in | No comments

Thursday, 27 June 2013

TechEd 2013 - Day 3

Posted on 23:08 by Unknown
The 3rd day of TechEd has finished. At the keynote I had the opportunity to see a demo of a Windows Store application for which I wrote part of the code and design. I also discovered that over 66% of companies have a bring-your-own-device policy active. For them, Windows 8 and Windows Phone 8 come with special features that secure and protect their private data. Because of this, companies like SAP will have all their sales-force applications migrated to Windows 8 (by the end of 2015).
From the developer perspective, we should try the new Visual Studio 2013. It has great features, especially for load testing, and even as a preview it is pretty stable. Also, if you are starting to develop a client-server application that communicates over HTTP/S, then go with Web API. Why? Because Microsoft invests, and will keep investing, a lot of resources in that area. WCF is good for TCP/IP and other protocols, but for HTTP/S the recommendation is Web API.
The last presentation of the day was very interesting. It started at 5 PM, at the same time as a presentation from Build where some new Windows Azure services were announced. Auto-scaling is now built in and supported by Azure: from now on we can define the scaling rules directly in the Windows Azure portal – this is so COOL :D. Combined with the alert and notification services, this feature will be very useful for the maintenance team. Great job!
That's all for today!

TechEd 2013 - Day 2

Posted on 01:17 by Unknown
The second day of TechEd 2013 has ended. For me, it was full of interesting information related to SQL Server and Windows Server. From my perspective, the most interesting sessions of the day were the ones related to security. During these sessions I realized that we are extremely vulnerable to attacks, even if we change the server password every 4 hours – an attack can be made in 5 seconds. The same problem exists on Linux systems, not only on Windows.
Things that I consider interesting:

  • FOCA – an interesting tool that discovers the public content listed by an internet endpoint. It extracts meta-information like user names, machine names, software versions and so on.
  • It is extremely easy to modify a worm or trojan to make it undetectable, and for $100 you can buy an application that gives you the ability to “manage” the infected machines.
  • You should NEVER have an async method that returns void, except when you are writing an event handler. If an exception occurs, it cannot be caught at the calling code level. Because of this you will see odd behavior from time to time: on Windows Phone the exception silently disappears, while on Windows 8 it makes the application “die” immediately, without any notification. Another problem with this kind of call is knowing when the action ends – the system cannot tell you when the call has completed. In these cases you should return Task, not void.
  • If you want to hide the main menu in Visual Studio, you could try the Hide Main Menu extension. Pressing ALT will show the menu again.
  • When Visual Studio is open, ALT + F6 helps you navigate between Visual Studio windows.
  • Don't forget that SHIFT + ALT gives you the ability to do multi-line editing.
  • If you are using SignalR and you need to create a small cluster and synchronize the different servers, then Windows Azure Service Bus can be a solution for you. You need only one line of code to use Service Bus as the backplane.
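The async-void tip above is about C#, but the same pitfall exists in other async models. Here is a rough Python asyncio sketch of the two behaviors – fire-and-forget versus awaited – as an analogy, not the actual C# mechanics:

```python
import asyncio

async def failing():
    raise ValueError("boom")

async def main():
    caught = []
    # Fire-and-forget, the analogue of 'async void': the coroutine is
    # scheduled, the call returns immediately, and the surrounding
    # try/except catches nothing -- the exception stays on the task.
    try:
        asyncio.create_task(failing())
        await asyncio.sleep(0)  # give the task a chance to run
    except ValueError:
        caught.append("fire-and-forget")
    # Awaiting the coroutine, the analogue of returning Task:
    # the exception propagates to the caller and can be caught.
    try:
        await failing()
    except ValueError:
        caught.append("awaited")
    return caught

print(asyncio.run(main()))  # prints ['awaited']
```

Note that the lost exception is only reported when the loop shuts down ("Task exception was never retrieved"), which is exactly the kind of delayed, odd behavior described above.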

That’s all for today, see you tomorrow.

Tuesday, 25 June 2013

TechEd 2013 - Day 1

Posted on 14:43 by Unknown

The first day of TechEd Europe 2013 has just ended. What can I say? Wow. A lot of interesting sessions about Microsoft technology. I participated in sessions on many different topics: big data, legacy code, load and performance testing in the cloud, and how we can increase the performance of our applications.
Let’s see some ideas that I noted and I want to share with you:

  • By creating a virtual network with the machines that we have in the cloud, we can do remote debugging very easily from Visual Studio.
  • With the new version of SQL Server and its in-memory database we can increase performance by a factor of 6x. If we recreate the stored procedures, the new SQL Server compiles them to native code (as a DLL) – this can give us up to 26x better performance.
  • Tested and reviewed code contains up to 70% fewer bugs.
  • PerfView is a great tool; I need to check it out.

If you deploy a VM or a web site to Azure you can win a supercar (a Microsoft contest – I hope they will pay the insurance too).
See you tomorrow.

Monday, 24 June 2013

TechEd 2013 - Pre-Conference Day

Posted on 13:34 by Unknown
The pre-conference day of TechEd has finished. It was a full day of interesting seminars. This morning it was pretty hard to decide which seminar to attend; I couldn't choose between “Enterprise Agility is Not an Oxymoron” and “Extending Your Apps and Infrastructure into the Cloud”.
In the end I decided to go to the second presentation, where I found a lot of cool information on the topic. In the next part of the post I will enumerate some of the interesting things I discovered:

  • At the moment, most applications hosted on Windows Azure follow the ON AND OFF or PREDICTABLE BURSTING usage patterns. To tell you the truth, I didn't expect the ON AND OFF apps to be on top.
  • Over 98% of organizations use virtualization. This is a huge opportunity for cloud providers.
  • Be aware of the SLA. If the provider cannot deliver the uptime promised in the SLA, you get your money back for the downtime, but you will not recover the money you lost while your app or service was down.
  • You can find a SQL Server VM image in the VM library that can be used in the cloud without having to buy your own SQL Server license.
  • For VMs you pay per minute, not per hour. The first 5 minutes are free :-).
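To put the SLA bullet into perspective, here is a quick back-of-the-envelope calculation of how much downtime a given SLA level still allows per month (a sketch using a 30-day month; the exact downtime definition varies per provider):

```python
# Allowed monthly downtime for a few common SLA levels.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43200

for sla in (0.99, 0.999, 0.9995):
    allowed = MINUTES_PER_MONTH * (1 - sla)
    print(f"{sla:.2%} uptime -> {allowed:.1f} minutes of downtime per month")
```

Even at 99.95% uptime the provider can be down for about 21 minutes every month without violating the SLA – and, as noted above, the credit you get back does not cover the business you lost in those minutes.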

See you tomorrow night.

Sunday, 23 June 2013

WPF - Binding of TextBox in a Grid

Posted on 12:58 by Unknown
Today we will talk about WPF and how we can improve the loading time of a window. One of my colleagues ran into a performance problem with an application written in WPF. The application has to run on virtual machines with Windows and .NET 3.5.
The application has a window with a grid that contains 200 cells. The model behind the view is an object with 50 properties. I know, that is too many properties, but this is the model we have – the job is to optimize the loading time of the page, not to review the design.
public class MyModel : INotifyPropertyChanged
{
    // The setters raise PropertyChanged in the real code.
    public FooModel DataObject { get; set; }
}

public class FooModel : INotifyPropertyChanged
{
    public string P1 { get; set; }
    public string P2 { get; set; }
    ...
    public string P50 { get; set; }
}
This model is bound to a grid that contains 4 columns and 50 rows. Each property is bound to a cell control; the cell control is a simple TextBox (this is sample code). The grid is similar to this:
<Grid>
    <Grid.RowDefinitions> … </Grid.RowDefinitions>
    <Grid Grid.Row="0" Margin="10">
        …
        <TextBox Text="{Binding Path=DataObject.P11}" Grid.Row="1" Grid.Column="1"></TextBox>
        <TextBox Text="{Binding Path=DataObject.P12}" Grid.Row="2" Grid.Column="1"></TextBox>
        …
This is simple binding, but on a virtual machine, under .NET 3.5, the loading time of this window is around 5-6 seconds. We needed to find the cause of the problem, so we reproduced it in a sample project and investigated.
It turned out that the binding itself is very slow. If we compile the sample project against .NET 4.0 the loading time decreases a lot, but migrating to .NET 4.0 is not an option for us right now.
We could try binding the grid itself to our model, so that each cell binds directly to a property:
<Grid DataContext="{Binding DataObject}">
    …
    <TextBox Text="{Binding P1}" Grid.Row="1" Grid.Column="1"></TextBox>
With this solution the performance improves by about one second, but the window still loads too slowly.
What can we do? The cause of the problem is the TextBox: its binding consumes a lot of resources.
Looking over the view, we can replace the TextBox with a TextBlock. Running the application again – voilà, the loading time of the page is under 1 second. Before applying this solution we had to check whether the text needs to be editable; the good news is that it does NOT.
In our case, replacing the TextBox with a TextBlock improved the loading time of the page by more than 5 seconds. If the text did need to be editable, we could register for the click event and replace the TextBlock with a TextBox on demand. To keep the same UI, we also added a border to the TextBlock.
The thing to remember from this story is to use the simplest UI controls that do the job. We shouldn't use a complicated UI control if we don't need it – the more complicated controls are, the more resources they need.

Friday, 21 June 2013

Topic Isolation - (Part 6) Testing the limits of Windows Azure Service Bus

Posted on 21:48 by Unknown
Some time ago I wrote about how I managed to process millions of messages over Windows Azure Service Bus. I discovered that processing millions of messages every hour increases the latency and response time of the topic.
I wanted to find out what happens to the other topics from the same account and data center when the load on one of them increases drastically.
To run these tests and answer my questions, I made the following setup:
  • Create 3 topics in the same data center: two on the same account and one on a different account.
  • Create a worker role that reads messages from the topics (via subscriptions) and monitors the delay time. The results are written to Windows Azure Tables.
  • Create 8 worker roles that push hundreds of thousands of messages onto the same topic in a very short period of time (multi-threading rules).
  • Create 4 worker roles that consume messages from that topic (via subscriptions).
In the end we had 12 worker roles hitting the same topic. Of course the latency of that topic increased, but we wanted to see whether the rest of the topics were affected. I ran this performance test for around 6 hours and…
The results were extremely good. The second and third topics were not affected by the load test; their latency remained the same for the whole duration.
I'm very happy with these results. They confirm that we will get good performance on all of our topics: topics are 100% isolated and are not affected by the load on our other topics.
Posted in Azure, Cloud, service bus, Windows Azure | No comments

Tuesday, 18 June 2013

Coding Stories VI - Inconsistent property naming

Posted on 23:15 by Unknown
Looking over some code, I found something similar to this:
public class Ford
{
    ...
    public int Power { get; set; }
}

public class Dacia
{
    ...
    public int Power { get; set; }
}

public class FordModel
{
    ...
    public int CarForce { get; set; }
}

public class DaciaModel
{
    ...
    public int CarForce { get; set; }
}
From the naming perspective, we have two different property names – Power and CarForce – for the same concept. This can mislead a developer, who will need to look twice to understand what is going on there.
public class Ford
{
    ...
    public int Power { get; set; }
}

public class Dacia
{
    ...
    public int Power { get; set; }
}

public class FordModel
{
    ...
    public int CarPower { get; set; }
}

public class DaciaModel
{
    ...
    public int CarPower { get; set; }
}
This is better now, but there is still a problem. Looking over the Ford and Dacia classes we notice that they share a common property that defines a car. In this case we should have an interface or a base class that holds the common items.
public abstract class Car
{
    ...
    public int Power { get; set; }
}

public class Ford : Car
{
    ...
}

public class Dacia : Car
{
    ...
}
I would go with an interface that defines the power concept of a car and an abstract class that adds it to the Car definition.
public interface IPower
{
    int Power { get; }
}

public abstract class Car : IPower
{
    ...
    public int Power { get; set; }
}

public class Ford : Car
{
    ...
}

public class Dacia : Car
{
    ...
}
In this way we are able to specify exactly which items have the power attribute.

Thursday, 13 June 2013

Coding Stories V - Serialization with XmlArrayItem

Posted on 05:48 by Unknown
Serialization – almost all applications need this feature. Nowadays serialization can take different formats; the trend is to use JSON, but there are still many applications that use XML.
Let’s look over the following code:
public class Car
{
    public string Id { get; set; }
}

public class City
{
    [XmlArray("Cs")]
    [XmlArrayItem("C")]
    public List<Car> RegisterCars { get; set; }
}
...
XmlSerializer serializer = new XmlSerializer(typeof(City));
serializer.Serialize(writer, city);
Output:
<City>
  <Cs>
    <C>
      <Id>1</Id>
    </C>
    <C>
      <Id>2</Id>
    </C>
  </Cs>
</City>
Even though the code compiles and works perfectly, there is a small thing that can bite us. Because we use the XmlArrayItem attribute, each node of the list is named “C”. If we ever need to deserialize a single “C” node, we are in for a surprise: this cannot be done with the default XmlSerializer configuration.
XmlSerializer serializer = new XmlSerializer(typeof(Car));
// Deserialize takes a stream or a reader, not a raw string:
serializer.Deserialize(new StringReader("<C><Id>1</Id></C>"));
This call fails, because the serializer expects a root node named “Car”, which cannot be found there.
Because of this, when we need to control the names of the nodes in a list, I recommend not using XmlArrayItem. A better approach is to put [XmlRoot(ElementName = "C")] on the Car class. With this approach we can deserialize a single child node of the list without having to deserialize the whole list.
The final code will look like this:
[XmlRoot(ElementName = "C")]
public class Car
{
    public string Id { get; set; }
}

public class City
{
    [XmlArray("Cs")]
    public List<Car> RegisterCars { get; set; }
}
Even if this is a small thing, on a big project it can affect how different components deserialize content. Deciding how items are serialized is an important step that needs to be done from the beginning.

Monday, 10 June 2013

MapReduce on Hadoop - Big Data in Action

Posted on 23:07 by Unknown
In the previous post we discovered the secret that lets Hadoop store hundreds of TB. Based on a simple master-slave architecture, Hadoop is a system that can store and manipulate big amounts of data very easily.
Hadoop contains two types of nodes for storing data. The NameNode plays the role of master: it knows the name and location of each file that Hadoop stores, and it is the only node that can locate a file based on its name. Around it we can have 1 to n nodes that store the file content – these are the DataNodes.
Data processing
Hadoop stores big data without any kind of problem, but it became famous as the system that can process big data in a simple, fast and stable way – a system that can extract the information we want from hundreds of TB of data. This is why Hadoop is the king of big data. In this post we will discover the secret of data processing – how Hadoop manages to do it.
The secret that gives Hadoop the ability to process data is called MapReduce. This paradigm was not invented by Hadoop, but Hadoop implements it very well. The first meeting with MapReduce will be hard – it is pretty complicated to understand at first, and everyone who wants to use it needs to understand the MapReduce paradigm first.
Without understanding MapReduce we cannot know whether Hadoop is the solution for our problem and what kind of output we should expect from it.
MapReduce and Tuples
Don't expect Hadoop to be a system that stores data in tables. It doesn't have the concept of a table; it only works with tuples formed by a key and a value. This is the only thing Hadoop uses to extract data: each task executed in the system accepts such tuples as input, and the output of a task is also formed of (key, values) pairs, where a pair can contain one or more values.
Even if this tuple seems trivial, we will see that it is the only thing we need in order to process data.
Map
The MapReduce process is formed of two different steps – Map and Reduce. Map is the step that converts the input data into a new set of data. The data obtained after this step is only intermediate data that will be used in the next step. We have the option to persist it, but generally this information is not relevant for the end user.
The Map action is not executed on a single node; it runs on 1 to m DataNodes. Each DataNode where the action runs contains part of the input data – so each node executes Map over its own slice of the input. The result of this action is smaller than the input data and can be processed more easily. At this step the result is kept in memory, not written to disk.
We can imagine the output of this step as a summary of our data. Based on the input and on how we map it, we will obtain different results; the output data doesn't need to have the same format as the input data. The result is partitioned by a function applied to the key of the tuple – in general a hash function, but we can define any kind of partitioning mechanism.
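The partitioning step can be sketched in a few lines of Python. This mimics the idea of a default hash partitioner, not Hadoop's actual implementation:

```python
def partition(key, num_reducers):
    # Hash the key and take the remainder: equal keys always land on
    # the same reducer, and distinct keys spread across the reducers.
    return hash(key) % num_reducers

keys = ["London", "Berlin", "London", "Roma"]
parts = [partition(k, 4) for k in keys]
assert parts[0] == parts[2]            # same key -> same partition
assert all(0 <= p < 4 for p in parts)  # always a valid reducer index
```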
The intermediate result can be used by Hadoop for different operations. At this step we can execute actions like sorting or shuffling; these small steps prepare the data for the next phase. These operations are also executed as part of the Reduce phase, as we will see below.
From the parallelization point of view, each node that executes the Map action can run from 10 up to 100-150 operations at the same time. The number of concurrent operations is dictated by the hardware and by the complexity of the Map action.
Reduce
Once we have the intermediate results, we can start the next processing step – Reduce. In contrast with Map, the Reduce operation is not executed on every Hadoop node; it runs on only a small subset of the nodes, because the size of the data to process has already been reduced. The data is partitioned between the Reducers.
While Map consists of a single step, Reduce contains 3 main steps:

  • Shuffle
  • Sort
  • Reduce

When the Shuffle step executes, each DataNode involved in the Map operation starts sending its results to the nodes that will run the Reduce operation. The data is sent over an HTTP connection; because Hadoop runs in a private network, there are no particular security problems here.
All the key-value pairs are sent and sorted based on the key. This is needed because the same key can arrive from different nodes. In general, this Sort step runs in parallel with the Shuffle.
Once the Shuffle ends, Hadoop performs another sort. At this point Hadoop can control how the data is sorted and how the result is grouped; this sort step gives us the possibility to sort items not only by key but also by other parameters. The operation is executed both on disk and in memory.
The last step is the Reduce itself. When this operation executes, the final results are written to disk. At this step each tuple is formed of a key and a collection of values; from this tuple, the Reduce operation selects the key and a single value – the final value.
Even though the Reduce step is very important, there are cases when it is not necessary; in these cases the intermediate data is the final result for the end user.
JobTracker, TaskTracker
The MapReduce operation requires two types of services – JobTracker and TaskTracker. These two services are in a master-slave relationship very similar to the one we saw for storage – NameNode and DataNode.
The main job of the JobTracker is to schedule and monitor each action that is executed. If one of the operations fails, the JobTracker is able to rerun it.
The JobTracker talks to the NameNode and schedules the actions so that each job is executed on the DataNode that holds its input data – this way no input data is sent over the wire.
The TaskTracker is a node that accepts Map, Reduce and Shuffle operations. Usually this is the DataNode where the input data can be found, but there are exceptions to this rule. Each TaskTracker has a limited number of jobs it can execute – slots – so the JobTracker tries to schedule jobs on TaskTrackers that have free slots.
An interesting aspect of the execution model is the way jobs are executed: each job runs in a separate JVM process. Because of this, if something happens (an exception is thrown), only one job is affected; the rest of the jobs keep running without problems.
Example
Until now we have discovered how MapReduce works. Theory is all very well, but we also need practice. I propose a small example that will help us understand, in a simple way, how MapReduce does its magic.
We start from the following problem: we have hundreds of files containing the number of accidents that happened each month in each city of Europe. Because the EU is formed of different countries with different systems, we end up with a lot of files – some contain the information for all the cities of a country, others for only one city, and so on. Let's assume we have the following file format:
London, 2013 January, 120
Berlin, 2013 January, 300
Roma, 2013 February, 110
Berlin, 2013 March, 200
…
Based on the input data we need to calculate the maximum number of accidents that took place in each city during a single month. This simple problem can become pretty complicated when we have 10 TB of input data – and in that case Hadoop is the perfect solution for us.
The first operation of the MapReduce process is Map. Each file is processed and a key-value collection is obtained; in our case the key is the name of the city and the value is the number of accidents. From each file we extract, for each city, the maximum number of accidents in a month. The output of the Map operation would look like this:
(London, 120), (Berlin, 300), (Roma, 110), (London, 100), (Roma, 210), …
This intermediate result has no value for us (yet); we still need to extract the maximum number of accidents for each city. Now the Reduce operation is applied, and the final output is:
(London, 120)
(Berlin, 300)
(Roma, 210)
Hadoop uses a very similar mechanism. Its power lies in its simplicity: being simple, it can be duplicated and controlled very easily across the network.
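The whole flow described above – Map, shuffle/sort, Reduce – can be simulated in a few lines of Python. This is a toy sketch on in-memory data, not Hadoop code:

```python
from collections import defaultdict

# Input records in the format used above: city, month, accident count.
records = [
    ("London", "2013 January", 120),
    ("Berlin", "2013 January", 300),
    ("Roma", "2013 February", 110),
    ("London", "2013 March", 100),
    ("Roma", "2013 April", 210),
]

# Map: emit (key, value) pairs -- here (city, count).
mapped = [(city, count) for city, month, count in records]

# Shuffle/sort: group all the values belonging to the same key.
groups = defaultdict(list)
for city, count in mapped:
    groups[city].append(count)

# Reduce: collapse each key's values to a single value -- the maximum.
result = {city: max(counts) for city, counts in groups.items()}
print(result)  # {'London': 120, 'Berlin': 300, 'Roma': 210}
```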
Conclusion
In this article we found out how Hadoop processes data using MapReduce. We discovered that the core of the mechanism is very simple, but it is duplicated on all the available nodes, and on each node more than one job can run in parallel. In conclusion, every task executed over the data is parallelized as much as possible – Hadoop tries to use all the available resources.
As a final remark, don't forget that the native language of Hadoop is Java, but it also has support for other languages like Python, C# or PHP.
Posted in Hadoop | No comments

Code Review and Under Stress Teams

Posted on 04:17 by Unknown
Nowadays, being agile is a trend. All projects are agile in their own sense, and of course each task needs to be reviewed. Today I would like to write about the review step.
Code review is the process in which a person looks over the code and tries to find mistakes missed by the developer who wrote it. There are different ways reviews are made; normally, in big teams the review is done separately, without pair programming or similar practices.
Because people don't do code reviews as they should, the value of the process decreases. In the end management realizes there is no value in code reviews and stops allocating time for them.

Why did they end up with this bad idea?
Because the developers doing the reviews are not doing them right.
For example, when a developer doesn't have time for all his tasks, he uses the review time to finish other tasks. In this case the review is not really made; in the best-case scenario he only looks over the code for 2-3 minutes. We end up with a task that was never reviewed.
Another case is when the review is made only because you have to do it. Here you can end up with funny situations, like a task that doesn't contain any changesets being moved from the review state to ready for test – even though, theoretically, the task contains code changes (the developer assigned the changeset to the wrong task id). For me this is a sign that reviews are not really being done by the team.
Review time consumes a lot of resources, and because the reviews are not actually made, the management team will not see their value: there are still major bugs, the quality of the code is not good, and so on.
What could we do in this case? How can we improve the process?
As one of my friends would say: KILL the team – they need to suffer! Pair programming is nice, but it is not possible in every situation.
Of course, that is not a solution; we need to think of something that motivates the team. The developers and the PM should talk and see why the reviews are not made. There can be many reasons, but in a team of good developers I expect the main one to be TIME. Developers should be given real time for reviews, not only virtual time.
Also, reviewing a sprint backlog item can hide mistakes; reviewing a product backlog item can be more useful. To understand the real problem and its context, looking only at a small task often doesn't help. More and more often I encourage this practice, because it makes it easier to see the real problems of the current solution (see the next paragraph).
Educate the team on how a review needs to be made. A code review is not only a code-styling review – field names, class names, method names and things like that. When making a review we should also look at other things, like unit tests: does the current implementation have unit tests? How is it integrated into the solution? Is the current solution good?
If you end up in this situation, I would not remove the review time. I would try to motivate the team and, why not, change the rules of the game. An interesting solution would be to change the owner of a task from the person who implemented it to the person who reviews it. This way, the reviewer knows that after he marks the task as OK, he becomes the owner and is responsible for it.

Friday, 7 June 2013

Certificates Hell - How you can kill all your clients

Posted on 12:07 by Unknown
Certificates – I have seen many production applications go down because of them. From big companies to small ones, each had a lot of problems caused by certificates; I would name this “Certificates Hell”. Every problem that appeared because of them was caused by people or by a wrong process.
In this post I will tell you a short story about how you can make over 10,000 clients unhappy. In a web application or a client/server application you need secure connections and, of course, certificates. The application in question has a pretty complicated backend and several web applications; in addition, there are native applications for iPhone, iPad, Android and Windows Phone.
They have been in production for about 5 years. During this period the web application changed and they started creating native applications for mobile devices. Each native application also contains certificates used to authenticate users and establish a secure connection.
Everything was great until they decided to update the root certificate used by the native applications. Guess what happened? All their mobile applications went down; none of their clients could use them anymore.
Because they updated the servers' certificates, every client that had the client certificates “hardcoded” was cut off – all the calls made from those applications were refused by the servers.
The solution was to push a new update for the mobile applications. Of course, an update to a mobile application needs 1-2 days to reach the clients, and during all this time they received a lot of complaints.
What did they do wrong? Two things.
Hardcoding the client certificates into the mobile applications, and changing the server certificates without taking all the dependencies into account.
They had to learn this the hard way, like other big companies.

Wednesday, 5 June 2013

Coding Stories IV - Ordering a list

Posted on 07:21 by Unknown
Let’s look at the following code:
public List<Result> SomeFunction(string machineId)
{
    ...
    if (machineId == null)
    {
        ...
        List<Result> results = GetAllResults();
        return results.OrderByDescending(x => x.Date).ToList();
    }
    else
    {
        ...
        return GetResultsByMachine(machineId);
    }
}
In production, a bug was reported by a client: from time to time, clients received elements in the wrong order.
The cause of this problem is not the ordering function itself. OrderByDescending is a core .NET method that works perfectly.
The problem is that the ordering is applied in only one branch of the IF. Because the list was generally already ordered in the database and clients rarely reached the ELSE branch, the issue appeared very rarely.
public List<Result> SomeFunction(string machineId)
{
    ...
    List<Result> results;
    if (machineId == null)
    {
        ...
        results = GetAllResults();
    }
    else
    {
        ...
        results = GetResultsByMachine(machineId);
    }

    return results.OrderByDescending(x => x.Date).ToList();
}
A small detail like this can generate odd behavior on client machines.

Tuesday, 4 June 2013

Windows Azure Billing model - per-minute granularity tip

Posted on 03:20 by Unknown
Windows Azure is changing. A lot of new features were released and existing ones were improved. In this post I want to talk about one change in Azure: the billing model.
Until now, billing granularity was per hour. This means that if I used a web role for 30 minutes, I paid for a full hour. If I used it for 1 hour and 1 minute, I paid for 2 hours. Because other cloud providers offer services with per-minute granularity, Microsoft decided to change the granularity to minutes as well.

This is great: you pay only for the time you actually use a compute resource (web roles, worker roles, VMs, Mobile Services and so on). For a classic application that uses Azure for hosting and doesn’t scale up and down for short periods, this change will not affect the bill at the end of the month; it stays the same flat value.
The true value of per-minute billing shows up for applications that scale up for very short periods. For example, take a scenario where a client needs to process tens of millions of requests in a very short time, say all of them in 6 minutes, with the task repeating every day.
For a case like this, where you need to scale for a very short period, per-minute billing is perfect. For the same price we can have 30 worker roles processing the requests instead of 10 or 15.
BUT, be aware of one thing: DON’T FORGET THAT YOU PAY FROM THE MOMENT THE INSTANCE STARTS, INCLUDING THE TIME THE DEPLOYMENT RUNS ON THAT INSTANCE.
This means that a 6-minute job on the instance, plus 10 minutes to start the instance and deploy your solution, plus maybe 1 or 2 minutes to stop the machine, ends up as 17-18 billable minutes. It is still better than paying for a full hour, but we need to account for this aspect when preparing a cost estimation.
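To make the estimation concrete, here is a rough cost sketch in Python for the scenario above. The hourly rate is a made-up number, purely for illustration:

```python
import math

# Hypothetical numbers, for illustration only.
RATE_PER_HOUR = 0.12     # assumed price of one instance per hour
INSTANCES = 30           # worker roles used for the burst
JOB_MINUTES = 6          # the actual processing time
STARTUP_MINUTES = 10     # instance start + deployment
SHUTDOWN_MINUTES = 2     # stopping the machine

billable_minutes = JOB_MINUTES + STARTUP_MINUTES + SHUTDOWN_MINUTES

# Per-minute billing: pay exactly for the minutes used.
per_minute_cost = INSTANCES * billable_minutes * (RATE_PER_HOUR / 60)

# Old per-hour billing: any started hour is charged in full.
per_hour_cost = INSTANCES * math.ceil(billable_minutes / 60) * RATE_PER_HOUR

print(billable_minutes)           # 18 minutes billed per instance
print(round(per_minute_cost, 2))  # 1.08
print(round(per_hour_cost, 2))    # 3.6
```

The point is not the absolute numbers but the ratio: for a short daily burst, per-minute billing charges only the 18 minutes actually used instead of a full hour per instance.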

In conclusion, this change is great and gives us the possibility to scale more for the same cost.

Monday, 3 June 2013

[Post-Event] Cloud Era at Faculty of Science - Sibiu

Posted on 07:28 by Unknown
This weekend I had the opportunity to be invited by the Faculty of Science from Sibiu. I gave a 3-hour session where I talked about the cloud in general and the features of Windows Azure.
Even though the session took place on a Saturday morning, a lot of students were interested in discovering the secrets of the cloud and Azure. My slides and the demo code can be found at the bottom of the post.
Slides:

Cloud and Windows Azure from Radu Vunvulea
Demo:
http://sdrv.ms/13gbjVW
