Windows Mobile Support


Thursday, 28 February 2013

log4net - Some fun with appenders

Posted on 07:31 by Unknown
One of the most widely used logging frameworks these days is log4net. It is an open-source framework that is available for different programming languages, such as Java (log4j) and C#.
I played with log4net recently to see whether it can become a bottleneck in an application. What I have observed so far is that the call to the appender is the longest part of a logging operation. Even though C# supports asynchronous calls to I/O resources, log4net makes all these calls synchronously.
Usually, if we have around 100 messages per second, the default log4net mechanism will be perfect for us. If you want to improve performance in these cases, you should use the TraceAppender. Yes, the appender based on the default .NET tracing support. It works great and is pretty fast, and it is a good option if you don't want to use a buffering appender. There are a lot of frameworks that use Trace, so don't be afraid of using it.
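A minimal log4net configuration that routes everything through the built-in TraceAppender could look like this (the layout pattern is just an example):

```xml
<log4net>
  <appender name="TraceAppender" type="log4net.Appender.TraceAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="TraceAppender" />
  </root>
</log4net>
```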
Another option is to use a buffering appender. This is an appender that does not send messages immediately; it sends them only when a specific number of messages have accumulated in the buffer. log4net already has this kind of appender defined ("BufferingForwardingAppender"). You should know that even if we use the buffer, the I/O calls are still made synchronously. This means that at the moment the buffer is full and its content needs to be flushed, there will be a synchronous I/O call.
A nice feature of this appender is the lossy option. Using it, you can configure the buffer to flush its content when a specific type of message is written into the buffer, for example when an error is logged.
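As a sketch, a lossy BufferingForwardingAppender that flushes its buffered history on errors could be configured like this (the referenced "FileAppender" is a placeholder name assumed to be defined elsewhere in the config):

```xml
<appender name="BufferingForwardingAppender"
          type="log4net.Appender.BufferingForwardingAppender">
  <!-- Keep the last 256 messages in memory. -->
  <bufferSize value="256" />
  <!-- Lossy mode: older messages are discarded; the buffered history
       is flushed only when the evaluator below triggers. -->
  <lossy value="true" />
  <evaluator type="log4net.Core.LevelEvaluator">
    <threshold value="ERROR" />
  </evaluator>
  <!-- "FileAppender" must be defined elsewhere in the configuration. -->
  <appender-ref ref="FileAppender" />
</appender>
```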
What we have observed so far concerns the way I/O is used: we have only synchronous calls to the files, and because of this we can get bottlenecks at this level. Theoretically, we could improve log4net's performance if we used asynchronous calls when writing to files.
I didn't have time to implement and measure this, but I think it would be pretty interesting. One solution is to make the calls asynchronous at the appender level: the calls that write the buffered content could run on a different thread. This solution could cause problems, because creating and working with threads is a pretty expensive thing from a resource perspective.
Another option would be to use asynchronous write calls to I/O, for example I/O completion ports. This would be a pretty clever thing to do, but it is a little bit complicated; playing with I/O completion ports is not simple.
The last option that I see as valid is to use a thread (maybe a background thread) that writes the content to I/O. Using this method, our application can hand content to log4net without having to wait for log4net to append or persist it. The real action of writing the content to I/O (a file, for example) is made by the second thread. The drawback comes from that second thread: it needs to run all the time, and it will be created by the appender. It no longer matters much whether we append the content synchronously or asynchronously, because we are already on another thread and the calls into log4net don't need to wait until the content is written.
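A rough sketch of this idea (not production code; the class name and constructor wiring are my own, and real log4net appenders are normally created through configuration with a parameterless constructor) could wrap an existing appender and drain a queue on a dedicated background thread:

```csharp
using System.Collections.Concurrent;
using System.Threading;
using log4net.Appender;
using log4net.Core;

// Hypothetical appender: Append() only enqueues; a background
// thread performs the actual (blocking) I/O through a wrapped appender.
public class BackgroundAppender : AppenderSkeleton
{
    private readonly BlockingCollection<LoggingEvent> queue =
        new BlockingCollection<LoggingEvent>();
    private readonly IAppender inner;
    private readonly Thread worker;

    public BackgroundAppender(IAppender inner)
    {
        this.inner = inner;
        worker = new Thread(Drain) { IsBackground = true };
        worker.Start();
    }

    protected override void Append(LoggingEvent loggingEvent)
    {
        // Capture thread-bound data before handing the event to another thread.
        loggingEvent.Fix = FixFlags.All;
        // The caller returns immediately; no I/O happens here.
        queue.Add(loggingEvent);
    }

    private void Drain()
    {
        // The background thread pays the I/O cost.
        foreach (LoggingEvent loggingEvent in queue.GetConsumingEnumerable())
        {
            inner.DoAppend(loggingEvent);
        }
    }
}
```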
Until now I haven't heard of people having performance problems with log4net. If we configure log4net properly, we should not have any kind of problem. This investigation was only for fun, to see whether we could improve log4net's performance.

Wednesday, 27 February 2013

Converters and IoC containers

Posted on 02:37 by Unknown
Nowadays, using an IoC container seems to be a must for any kind of project. Even in small and simple projects, people have started to use one. It is very hard for me to understand why you would use an IoC container for every kind of project, because you raise the complexity without needing it. People also add all their objects to the IoC container without asking themselves whether they really need that object there.
What do you think about converters added to the IoC?
Very often people add all the converters to the IoC container, even converters that are used in only one class. In a big project we can end up with hundreds of converters in the container. Theoretically you could have a case where you need to replace a converter with another one, but how often will that be needed? Also, if you have a converter that is used in only one place, changing the converter will (probably) require changing that class as well.
I would not add converters to the IoC container. The only converters I would add are the ones that are used by different components and have a big chance of being changed. To tell you the truth, I haven't had the opportunity to meet that kind of converter yet :).
For example, a converter that converts an entity received over the wire will need to be changed if the entity changes. But at the same time, there is a big chance you will also need to change your component. Because of this, I would not add this converter to the IoC container.
In conclusion, we could say that in general a converter doesn't need to be added to the IoC container. The cases where we really need a converter in the container are rare and isolated.

Posted in IoC

Saturday, 23 February 2013

Windows Azure Virtual Machine - Where is my data from disk D:

Posted on 08:37 by Unknown
Today we will talk about the kinds of disks that are available on a virtual machine (VM) in Windows Azure. I decided to write this post after someone asked me why the content of disk D: was lost after a restart. He told me that this is a bug in Azure, but it is not.
When you create a Windows Azure Virtual Machine, you will notice that more than one disk is attached to it. There are three types of disk that can exist on an Azure VM:
  • OS Disk – drive C:
  • Temporary Storage Disk – drive D:
  • Data disk – drive E:, F:, …
The OS disk contains the operating system. It is a VHD that is attached to the machine; you can create a custom VHD that contains the operating system and any other applications you need. At this moment, the maximum size of this disk is around 124 GB. If you need more space, you can use the data disks.
Each VM can have one or more data disks attached to it. Each VHD attached to the VM can have a maximum size of 1 TB, and the maximum number of data disks that can be attached to a VM is 20. I think there is enough space available for any scenario we can imagine.
The temporary storage disk is used to store temporary information, for example when you need to cache content like pictures or documents. In the case of a restart, or if something happens to the machine, all the content of this disk is lost. Because of this, you should never store data that needs to be persisted on this disk; its purpose is not to persist data.
Usually people tend to use this disk to persist data because they only look at the happy flow, where the machine doesn't crash or restart (and they don't read MSDN). On the happy flow you can get the sensation that the temporary storage disk (D:) can be used to persist your data, because you can see and access the data all the time.
Don't try to use the OS disk to persist data either. The same thing will happen when something goes wrong and the machine restarts: your original VHD image will be attached, and because of this all your changes will be lost.
If you need to persist any kind of information, you should use the data disks. The information on these disks will not be lost in the case of a crash. Also, you should know that all VHD images are stored in blob storage.
From my perspective, all the information that needs to be persisted should be stored in blobs. Why? Because you will need to access this data from more than one location.
In conclusion, we shouldn't use disk D: (the temporary storage disk) to store data that needs to be persisted. The best place for this kind of data is a data disk or blob storage.
Posted in Azure, Windows Azure

Monday, 11 February 2013

Representing sub-system dependencies

Posted on 00:54 by Unknown
When we start designing a big system, we may also think about how to split our solution into sub-systems. We can end up having a lot of sub-systems with different dependencies. The same thing happens when we start splitting a sub-system into components: we end up with a lot of components with different dependencies.
How can we represent these dependencies in a simple and clean way?
I have seen different solutions where you end up with complicated schemas or with trees. Both are complicated to read, and people will spend a lot of time understanding the dependencies: X depends on Y and Z, and so on.
This month I read “Software Architecture in Practice” written by L. Bass, P. Clements and R. Kazman.
There I discovered a great and simple way to represent all these dependencies: they can be captured in a simple table that has our sub-systems (or components) on the diagonal.
Each input resource that a sub-system needs appears in its column, and each resource is placed in the cell of the row of the sub-system that provides it. In the same row, next to our sub-system, we can see its outputs and which sub-systems depend on it.
In the following example we have three sub-systems: Aircraft System Group (ASG), Avionics Group (AG) and Environment Group (EG). We can observe very easily that the EG depends only on the AG, from which it needs the Ownship and Emissions data. Likewise, the output of the EG is used only by the AG, which consumes its Environment and Emitter Data.
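The table itself appeared as an image in the original post; reconstructed from the description above, its structure looks roughly like this (cells marked "?" are not recoverable from the text):

```text
      | ASG | AG                        | EG
------+-----+---------------------------+--------------------
 ASG  | ASG | ?                         | ?
 AG   | ?   | AG                        | Ownship, Emissions
 EG   | ?   | Environment, Emitter Data | EG
```

Reading along a row shows what a sub-system provides and to whom; reading down a column shows everything a sub-system consumes.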
This example is taken from the book.
As we can observe, it is very easy to understand how each sub-system and component interact. Based on this table we can see which sub-systems have a lot of dependencies. Also, when we want to change a sub-system, we can easily identify the sub-systems that may be affected by the change.
I encourage you to try this approach. If you have time, I really recommend reading the book.

Thursday, 7 February 2013

Shared Access Signature and Access Level on Blob, Tables and Queues

Posted on 23:43 by Unknown
Some months ago I wrote some posts about Shared Access Signature (SAS). Yesterday I received a simple question that comes up when we start to use SAS over Windows Azure Storage (blobs, tables or queues).
When I’m using Shared Access Signature over a blob, should I change the public access level?
People can have the feeling that from the moment you start using SAS over a container or a blob, others will no longer be able to access the content in the classic way. SAS doesn't change the public access level; because of this, if your blob is public, people will be able to access it using a SAS token or with a normal URL.
To control access to a container or a blob using only SAS, you need to set the access level of the content to private. This can be done from different places (the Windows Azure Portal, various storage clients, or from code). Having a container or blob with the access level set to private means that only requests with account credentials or a valid SAS token will be able to access the content.
I recommend having different containers for the content that needs to be public and the content that needs to be private (with the private content accessed using SAS). This way, content management will be easier. Also, try to generate SAS tokens per blob rather than per container, when possible.
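As a sketch using the .NET storage client library (2.x-era API; the connection string, container and blob names are placeholders), making a container private and issuing a per-blob, read-only SAS token looks like this:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasDemo
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("private-content");

        // Make sure the container allows no anonymous access.
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Off
        });

        // Generate a read-only SAS token for a single blob, valid for one hour.
        CloudBlockBlob blob = container.GetBlockBlobReference("report.pdf");
        string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        });

        // The URL handed to the client is the blob URI plus the SAS token.
        Console.WriteLine(blob.Uri + sasToken);
    }
}
```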
Using the storage account name and access key, anybody can access our storage account, even if we are using SAS. From the moment we start using SAS, our clients should not have access to our storage account credentials. Also, with the storage account credentials anybody can change our SAS configuration.

In conclusion, we could say that from the moment we start using SAS, we should switch the access rights of the blobs and containers to private.

Posted in Azure, Windows Azure

SQL Azure Export Process - Error SQL71562: Procedure: [dbo].[store_procedure_name] has an unresolved reference to object [database_name].[dbo].[table_name]

Posted on 03:35 by Unknown
Using a database involves having a backup mechanism. If we are using SQL Azure and the new Windows Azure portal, we can make manual backups very easily.
If your database was deployed a long time ago, you may discover that an error occurs during the export process. This error usually appears only for stored procedures, and the error message is similar to this:
Error SQL71562: Procedure: [dbo].[store_procedure_name] has an unresolved reference to object [database_name].[dbo].[table_name]
This happens because your stored procedure references the table (or another stored procedure) using a path that also contains the database name. The solution to this problem is quite simple, but it requires changing all your database scripts that contain the database name in the object path.
Each line of script that refers to a specific database object needs to be changed so that it no longer contains the database name. After you make this change, you will need to redeploy all the updated scripts.
Before:
INSERT INTO [myFooDatabase].[dbo].[TableName] (…) …
After:
INSERT INTO [dbo].[TableName] (…) …
Posted in Azure, Windows Azure

Wednesday, 6 February 2013

CRON job in Windows Azure - Scheduler

Posted on 02:04 by Unknown
Yesterday I realized that I have to run a task on Windows Azure every 30 minutes. The job is pretty simple; it has to retrieve new data from an endpoint. This is a perfect task for a CRON-based job.
The only problem is that the current version of Windows Azure doesn't have built-in support for CRON jobs. People might say that we could have a timer and run the given task every 30 minutes. That is a good solution and would work perfectly, but I wanted something different.
I didn't want to create my own CRON-based job; I wanted something built into the system. I started to look around and found an add-on for this. Windows Azure offers a Store where companies can offer different add-ons for Windows Azure. These add-ons can be installed very easily, and if they are not free, the payment method is quite simple: each month, the Azure subscription bill will contain the cost of these add-ons. From my perspective, this is a pretty simple and clean payment mechanism.
In the Store I discovered the “Scheduler” add-on, offered by ADITI Cloud Services. This add-on gives us the possibility to create different jobs that are called at a specific time interval. We don't need a timer, another machine or anything similar.
How does it work? It is based on normal HTTP requests that are made automatically to your machines: their servers call your machines when a job needs to be executed. At this moment they support only HTTP, without any kind of authentication; I expect support for authentication and HTTPS in the near future.
At this moment the service is free and you can execute around 5,000 jobs per month at no cost. This means you can trigger a job roughly every ~9 minutes.
Let's see some code now. After you install the add-on from the Windows Azure Store, the “Scheduler” will generate a tenant id and a secret key. These will be used by your application when you configure the jobs.
After this step, we need to install a NuGet package called “Aditi.Scheduler”. It contains all the components we need to configure and use this add-on.
In our application we have to create an instance of “ScheduledTasks”. Using this instance we can create, modify or delete jobs.
ScheduledTasks scheduledTasks = new ScheduledTasks([tenantId], [secretKey]);

TaskModel task = new TaskModel
{
    Name = "MyFooJob",
    JobType = JobType.Webhook,
    CronExpression = "0 0/5 * 1/1 * ? *",
    Params = new Dictionary<string, object>
    {
        { "url", "http://foo.com/service1" }
    }
};

scheduledTasks.CreateTask(task);
Each job can be changed, deleted and so on. What we should remember is to delete a job when we stop using it: even if our solution is no longer deployed, the job will still be triggered each time. Because this solution is based on HTTP requests, we need to expose a REST endpoint through which our job can be triggered.
A cool thing we already have is support for different types of jobs. We don't have only web jobs, but also jobs that use a Service Bus Queue or an Azure Queue. In this way our application can listen to an Azure Queue, and our job is triggered by putting a specific message in the queue. This feature can be used on worker roles that don't have an HTTP endpoint exposed.
In conclusion, I would say this is a pretty interesting add-on with a lot of potential.
Posted in Azure, Windows Azure

Monday, 4 February 2013

Workflows over Windows Azure

Posted on 03:43 by Unknown
Nowadays, almost all enterprise applications have at least one workflow defined. It is not only complex applications that contain workflows: even a simple e-commerce application can have a workflow defined to manage orders or product stock, for example.
Supporting a workflow in our application can be done in two ways. The first approach is to search the market for available solutions and choose the most suitable one for our project. This approach gives us a workflow mechanism, but at the same time it can generate other costs through licensing and/or developing custom functionality.
The second approach is to develop the workflow mechanism from scratch. This can be pretty tricky, because there are a lot of problems that need to be solved: a failover mechanism, rule definitions, the guarantee that no message in the workflow is lost, and many more things that need to be defined and implemented on our own.
All the data flowing through the workflow needs to be persisted somewhere. Different solutions can be used, from relational databases to NoSQL or in-memory databases, and whichever persistence method we use will consume resources from our infrastructure.
Besides this, applying the rules defined in the workflow requires a lot of computing power. Even simple rules can become a nightmare if you need to process 100,000, 200,000 or even 500,000 messages per hour.
One of the most important requirements of a workflow mechanism is availability. We don't want an e-commerce application that cannot accept new orders because the workflow mechanism is down or too busy with other orders. Even with a very scalable workflow mechanism, more instances mean more resources and, in the end, more money.
Until now we have seen the different requirements of a workflow mechanism. All these requirements translate into time, resources and money.
Windows Azure can help us when we need a workflow mechanism. Windows Azure Service Bus offers us the possibility to define a workflow very easily: we can define rules, states and custom actions while the system is running.
First of all, let's find out what Windows Azure Service Bus is. It is a brokered messaging infrastructure that can deliver a message to more than one listener. Each listener subscribes its interest to a specific topic, and messages are added to the system through the topic. Once a message is added to the topic, the Windows Azure infrastructure guarantees that the message will be delivered to all subscribers.
The power of Windows Azure Service Bus, with respect to workflows, is the filtering mechanism that can be defined at the subscription level. Each subscription can have one or more rules attached, and these rules are used by the subscription to accept only the messages that match them.
Figure 1: Workflow definition over Service Bus
The rules can make different checks, from simple ones that compare strings (flags) to more complex ones. Using these rules we can define a workflow over one or more topics in Windows Azure Service Bus. Each state of our workflow can have a subscription assigned; this guarantees that messages with a given state are received only by a specific subscription. In this way we can have subscriptions that process messages only with a given state.
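As a sketch (the topic, subscription and state names are made up for illustration), per-state subscriptions with SQL filter rules can be created like this:

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class WorkflowSetup
{
    static void Main()
    {
        // Placeholder connection string.
        NamespaceManager manager =
            NamespaceManager.CreateFromConnectionString("<service bus connection string>");

        if (!manager.TopicExists("orders"))
        {
            manager.CreateTopic("orders");
        }

        // Each workflow state gets its own subscription; the SQL filter
        // accepts only messages whose "State" property matches.
        manager.CreateSubscription("orders", "validated",
            new SqlFilter("State = 'Validated'"));
        manager.CreateSubscription("orders", "shipped",
            new SqlFilter("State = 'Shipped'"));
    }
}
```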
From the scalability point of view, we can have more than one subscriber per subscription. This means that messages with a given state can be processed in parallel by multiple instances; a message from a subscription will be received by only one subscriber (listener).
A message can be consumed from a subscription in two ways: Peek and Lock or Receive and Delete. With the first method, a message is removed from the subscription only when the receiver confirms that the message was processed successfully; otherwise the message becomes available for consumption again. We also have support for dead letters: we can mark a message as corrupted and it will be moved to a sub-queue that contains the messages marked with this flag. A nice feature related to dead letters is the support to dead-letter a message automatically when the number of delivery retries reaches a specific value.
Windows Azure also gives us the possibility to define a custom action that is executed over a message at the moment it arrives in a subscription. For example, we can add a new property to the message that represents the sum of two other properties. Using this feature we can very easily change the properties of an item when its state changes.
If we have special cases where we change the state of items from one state to another without custom actions, we can use the forwarding feature of subscriptions. Windows Azure Service Bus can forward a message from a subscription to another topic automatically; in this way we don't need to retrieve the message from the subscription and forward it to the topic ourselves.
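A minimal sketch of automatic forwarding (the topic and subscription names are illustrative; the forwarding target is set on the subscription description):

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class ForwardingSetup
{
    static void Main()
    {
        NamespaceManager manager =
            NamespaceManager.CreateFromConnectionString("<service bus connection string>");

        // Messages that land in this subscription are forwarded
        // automatically to the "archive" topic, without any receiver code.
        SubscriptionDescription description =
            new SubscriptionDescription("orders", "completed")
            {
                ForwardTo = "archive"
            };
        manager.CreateSubscription(description);
    }
}
```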
Windows Azure Service Bus is very scalable: it can support up to 10,000 topics per service namespace, each topic can have a maximum of 2,000 subscriptions, and 5,000 concurrent receive requests are supported. This means we can define, on a single topic, a workflow that has 2,000 states. Also, nothing stops us from defining a workflow that uses more than one topic.
From the cost perspective, we are charged $0.01 per 10,000 messages sent or delivered by Windows Azure Service Bus. This means we can send 1 million messages to the Service Bus for only $1. If you use this service from an application hosted in the same datacenter, you will not be charged for data traffic; otherwise, outbound traffic is charged at a rate that starts at $0.15 per GB.
Workflow Manager, built on top of Windows Workflow Foundation, was launched at the end of last year. It supports the integration of workflows with Windows Azure Service Bus, offering better support for reliability, asynchronous processing and coordination.
In conclusion, we saw that defining a workflow over Windows Azure Service Bus can simplify our workflow mechanism. The service is available from any location in the world and is very scalable. With features like dead letters, automatic message forwarding and the guarantee that messages are not lost, Windows Azure Service Bus is one of the best candidates when we need a workflow mechanism.
Posted in Azure, Windows Azure

Saturday, 2 February 2013

Scalability points on Cloud

Posted on 07:06 by Unknown
Cloud: another buzzword that we hear almost every day. At the moment, the providers that offer this kind of service include Amazon, Microsoft (Windows Azure), Google and Rackspace.
When we think about the cloud, what comes to mind? One, two or more instances that we keep in the cloud, and when we need more resources we can grow the number of instances very easily.
At the moment, a cloud provider like Microsoft gives us several scalability points. In this article we will find out how to create a scalable cloud application and how to use the services that Windows Azure provides.
Content Delivery Network
Let's say that we have a web application with a lot of static content. By static content we mean images, CSS and HTML that don't change every second (at every request). Normally, when we observe that the load on our machines is quite high, we try to increase the instance count. This seems like a good idea for our problem, but in terms of cost it might not be the best.
Even if we cache content on the server (through IIS or other methods), all the requests still reach our machines. They have to respond to every request, and so they consume our available resources.
A solution for this case may be a Content Delivery Network (CDN). Through this service, static content is cached on different CDN nodes depending on the physical location of the client, and all requests for this kind of content are served by the CDN. Normally a CDN can handle only static content, but newer CDNs can also handle dynamic content. This service is also provided by Windows Azure and can be easily integrated into our applications.
By using this kind of content delivery mechanism, the machines hosting our application are not hit on every resource request, and the load level automatically decreases.
Cache
The next place in our application where we can easily introduce a scalability point is the cache. To avoid making numerous requests to external services, or recalculating different values, we can use a caching mechanism.
Windows Azure provides this service in different ways. The advantage of using a service to cache data shows up the moment we need to scale the cache mechanism: we don't have to buy machines or licenses, or synchronize the nodes ourselves.
Currently, when we create a machine on Windows Azure, we can specify how much of its RAM is allocated to the cache. Another caching option is to create dedicated machines where the entire RAM is allocated for caching. In both cases, synchronizing data between two or more cache instances is natively supported. These settings are made before deploying the solution.
The third option available is to use a dedicated cache service. In this case we don't have to manage our own instances; the resources are consumed entirely as a service.
Using a caching mechanism of this kind takes away the problems that can occur in a traditional caching mechanism; problems like synchronization or adding a new cache node disappear.
Video streaming
Until now we have analyzed two classic problems that appear in every web application. But what can we do if we need video streaming? As we all know, this is extremely costly, not only in terms of resources consumed on the server side but also in terms of bandwidth.
Normally we would have machines to handle the video streaming, and the moments when we have a lot of active users can cause us some pretty big problems. For this situation, Windows Azure helps us with a dedicated service: Windows Azure Media Services handles video streaming completely, starting with the encoding and ending with the encryption and delivery of the content (offline and online).
This service frees our instances, which would otherwise have to handle the delivery and processing parts. Being a dedicated service, its scalability is ensured from the start.
Web services
So far we've seen three cases where we can scale using different cloud services without increasing the number of instances. Depending on the application, we may expose different web services that collect various information from clients.
What can we do when a big number of clients want to connect to us at the same time? Usually we would try to allocate a bigger number of instances to handle this.
One solution that Windows Azure offers is Service Bus Relay. Through this service we can expose any web service in a “fire and forget” manner: all requests are stored in the form of messages, and our service can process them at any time. Even if the number of requests increases greatly, the service will always be available.
Windows Azure Service Bus Relay can be easily integrated into our application; the only change needed in an application that uses WCF is altering the configuration file.
Communication between instances
Generally, in a complex application we have different types of machines running different components of our application. It is necessary to ensure a persistent and independent way of communication between these instances.
We could try to implement a communication system through a database, or through another instance that handles only this. But when we have to scale, we will need to think about resolving issues like synchronization between instances.
Windows Azure comes to our help by providing different ways of sending messages. All these services are accessed using a URL and can be reached from anywhere on the Internet.
The most basic service, and also the one with the lowest cost, is Windows Azure Queues. This queue allows us to distribute messages in a very simple, quick and cheap way. If we have to distribute messages to multiple consumers, have support for dead letters, or guarantee message order, then we have to work with Windows Azure Service Bus Queues and/or Windows Azure Service Bus Topics.
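A minimal sketch of passing a message between two instances through Windows Azure Queues (2.x-era storage client; the connection string and queue name are placeholders):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueDemo
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
        CloudQueueClient client = account.CreateCloudQueueClient();
        CloudQueue queue = client.GetQueueReference("orders");
        queue.CreateIfNotExists();

        // Producer side: one instance pushes a message...
        queue.AddMessage(new CloudQueueMessage("order-42"));

        // Consumer side: ...another instance pops it later and
        // deletes it once it has been processed.
        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            Console.WriteLine(message.AsString);
            queue.DeleteMessage(message);
        }
    }
}
```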
Data transmission
A client-server application can mean a permanent exchange of data. Many times this means exposing various web services through which our clients execute queries on a database. In these cases our application has to contain instances that expose the service, plus a database (relational or non-relational). Besides keeping these instances running, we also have to deal with the maintenance of the web services.
In such cases we can imagine our system from another point of view: we can store and expose data through the Windows Azure Table Storage service. These non-relational tables allow us to store hundreds of GB of content without any problems.
When we want to use such a solution, two questions come up: what information can each client access, and what about auditing? Through tokens that we generate for each client, we can define what content the client has access to, what kinds of operations it can perform, and how long the token is valid. This functionality is called Shared Access Signature, and it allows us to eliminate from the equation the instances that would otherwise serve data to the clients. These tables should not reflect how we store our data internally, but rather the data we want to offer our customers.
Windows Azure Storage natively supports auditing; all we have to do is activate it and specify what kinds of operations we want to audit.
With such a solution, we only need a component that handles the generation and management of these tokens; we don't need any other components.
Conclusion
We analyzed different ways in which we can make our application scalable: from different caching systems and systems for sending messages, to services that allow us to do video streaming without raising any allocation or security problems.
The most important thing we have to do when we think about a cloud application is to try to identify all the points where we need scaling, and to look for services that can offer it. If we need features that are not already available in the cloud, it is better to separate them onto different instances, or at least different processes; if we later need to scale, it will be easier to increase the number of instances that deal with a specific function.
Today a cloud solution means more than several machines running our application behind a load balancer (horizontal scalability). Cloud means a multitude of services that come to our aid, helping us create more complex applications and scale exactly where we need to.
Posted in Azure, Windows Azure