3/17/2012

Verify Whether a SQL Server Agent Job is Running

DECLARE @jobname sysname  ='Running Job' -- Enter the job name here
SET NOCOUNT ON
IF NOT EXISTS (SELECT * FROM msdb..sysjobs Where Name = @jobname)
BEGIN
 PRINT 'Job does not exist'
END
ELSE
BEGIN
 CREATE TABLE #xp_results
 (
 job_id                UNIQUEIDENTIFIER NOT NULL,
 last_run_date         INT              NOT NULL,
 last_run_time         INT              NOT NULL,
 next_run_date         INT              NOT NULL,
 next_run_time         INT              NOT NULL,
 next_run_schedule_id  INT              NOT NULL,
 requested_to_run      INT              NOT NULL, -- BOOL
 request_source        INT              NOT NULL,
 request_source_id     sysname          COLLATE database_default NULL,
 running               INT              NOT NULL, -- BOOL
 current_step          INT              NOT NULL,
 current_retry_attempt INT              NOT NULL,
 job_state             INT              NOT NULL
 )
 INSERT INTO  #xp_results
 EXECUTE master.dbo.xp_sqlagent_enum_jobs 1, 'sa'
 IF EXISTS (
 SELECT 1 FROM #xp_results X
 INNER JOIN
 msdb..sysjobs J ON X.job_id = J.job_id
 WHERE x.running = 1 AND j.name = @jobname)
 BEGIN
  Print 'Job is Running'
 END
 ELSE
 BEGIN
 Print 'Job is not Running'
 END
 DROP TABLE #xp_results
END

Adhoc Reporting Using SSRS 2008 R2

Reporting Services comes with a built-in modeling tool called Report Model Designer, which is used for developing adhoc reports. Because they are easy to create, adhoc reports are often developed by end users rather than by developers. The main purpose of the report model is to remove the dependency on coding and give users a semantic model so they can develop their own reports on the fly without consulting a developer.

How to use the Adhoc Reporting Model?

The report model can be developed for an OLTP or an OLAP database using a wizard-based approach. The report model built with the model designer is then deployed to the report server and used to develop adhoc reports. The model is accessed through the Report Builder tool, which is freely available for download from Microsoft. Report Builder 3.0 lets users develop reports not only against the report model but also against a variety of data sources, and it comes with rich visualizations including sparklines, data bars, indicators, a variety of charts and graphs, and a map visualization.

Building the Report Model

Now that we know what the report model and Report Builder are, let's look at how a report model is developed from scratch and how adhoc reports are built on top of it. For this exercise we will use the AdventureWorks database.
Open Business Intelligence Development Studio (BIDS).
1. Click File->New Project, select Report Model Project, name it Adhoc Reporting and click OK.

2. Add a data source: right-click the Data Sources folder->Add New Data Source.
3. Click New.
4. Specify the server name and select AdventureWorks as the database. Click OK to add the data source.
5. Click Next and then click Finish to add AdventureWorks.ds as a data source in Solution Explorer.
6. Add a data source view: right-click Data Source Views->Add New Data Source View.
Note: A DSV lets you change and extend the database schema without affecting the underlying data source. You can add custom calculations and named queries (similar to building views in the database). Using the Explore Data feature you can view the contents of a table inside BIDS; Explore Data lets you manipulate the data using OWC components.

7. Click Next.

8. In this example the data source is a relational database. Select the required tables from the database. I have selected tables for sales analysis.

9. Click Finish to complete the wizard.


10. To add a report model, right-click Report Models->Add New Report Model.

11. Click Next.

12. Click Next. We will use the default rules for model generation, but you can always select the rules you need for the model. To understand the rules in more detail, click Help.

13. The first option is recommended for report model generation, rather than using the current statistics of the model. Click Next.

14. Click Run to generate the model.

15. After the model is generated click Finish to complete the wizard.

16. Now deploy the model to the report server. Right-click the Adhoc Reporting project and go to Properties to set the report server properties.

17. Right-click the solution and click Deploy.

18. Open Internet Explorer and browse to http://localhost/Reports to open Report Manager, the Web application Microsoft provides for managing deployed reports. You can see the models and data sources deployed on the report server. Click the Report Builder link to start Report Builder.

Building An Adhoc Report

1. Report Builder is downloaded to your machine after you click Run.

2. Report Builder 3.0 opens once it has downloaded. As you can see, Report Builder 3.0 gives you four options to begin report building: a table or matrix wizard, a chart wizard, a map wizard and a blank report. Click the blank report for this run-through.



3. Right-click Data Sources and select the option Use shared connection or report model.

4. Browse to the Models folder and select the model AdventureWorks. Click Open and then OK.

5. Right-click Datasets and add a dataset. Select the option to use a dataset embedded in the report and click Query Designer.

6. A dialog box opens in which you can drag and drop fields from the left pane.

7. I have chosen sales group, sales current year and sales last year fields to analyze. Click OK.

8. Click Ok.

9. Using the report model and Report Builder, this report was created in less than five minutes.

Optimizing C# for XAML Platforms

Georgi Atanasov and Tsvyatko Konov
When working with the C# development language, individual developers often find a process that works for them and stick to it. After all, if the code passes all our tests, we can ship it, right? Well, no. Releasing products in today’s complex programming landscape isn’t that simple. We think developers need to revisit some of the standard decisions we make about Extensible Application Markup Language (XAML) concepts such as dependency properties, LINQ and the layout system. When we examine these aspects from a performance perspective, various approaches can prove questionable.
By exploring dependency properties, LINQ performance and the layout system through some code examples, we can see exactly how they work and how we can get the best performance out of our applications by rethinking some common assumptions.

The Problem with Dependency Property Look-Up Time

DependencyProperty and DependencyObject are the fundamentals on which Windows Presentation Foundation (WPF), Silverlight and XAML are built. These building blocks provide access to critical features such as styling, binding, declarative UI and animation. In a typical program, we use them all the time. Every single bit of such high-end functionality comes at a price measured in performance, however, be it loading time, rendering speed or the application’s memory footprint. To support framework functionality that includes default values, styles, bindings, animations or even value coercion in WPF, the property system backing them up needs to be more complex than standard CLR properties.
The following steps occur during a DependencyProperty effective value look-up:
  • The structure holding the data for the specified property is retrieved from the property store.
  • Once the structure is retrieved, its effective value is evaluated -- is it a default value, a style, a binding or an animated value?
  • The final effective value is returned.
Figure 1 shows some measurements (in milliseconds) of CLR properties and DependencyProperty usage.
100,000 Iterations | CLR Properties | DependencyProperty
Set different values | 3 ms | 1062 ms
Set same value | 3 ms | 986 ms
Get value | 3 ms | 154 ms
Figure 1 DependencyProperty Get/Set Measurements
Note All measurements were performed on the Silverlight for Windows Phone platform, on a Samsung Omnia 7 device. We used a mobile device because of its lower hardware resources, where differences are more distinct.
Figure 2 shows the class used to perform the tests.
Figure 2 Simple “Control” Inheritor
public class TestControl : Control
{
    public static readonly DependencyProperty TestIntProperty =
        DependencyProperty.Register("TestInt", typeof(int), typeof(TestControl), new PropertyMetadata(0));

    public int TestInt
    {
        get
        {
            return (int)this.GetValue(TestIntProperty);
        }
        set
        {
            this.SetValue(TestIntProperty, value);
        }
    }
}
This test is the simplest one possible, without any styles, bindings or animations applied. If you try the same scenario on a ListBox, you’ll see even bigger numbers. It demonstrates that DependencyProperty usage is heavier and implies that performance pitfalls can result in enormous overhead. In applications that use extensive looping, this performance hit is even more apparent.

Solving the Dependency Property Look-Up Time Problem

The challenge is to keep all the value provided by the dependency property system and to improve the look-up performance at the same time. Two important yet simple optimizations can help improve the overall application performance.

Cache a DependencyProperty effective value in a member variable for later use

Figure 3 shows an extended version of the TestControl class.
Figure 3 Simple “Control” Inheritor with a Cached Property Value
public class TestControl : Control
{
    public static readonly DependencyProperty TestIntProperty =
        DependencyProperty.Register("TestInt", typeof(int), typeof(TestControl), new PropertyMetadata(0, OnTestIntChanged));

    private int testIntCache;

    public int TestInt
    {
        get
        {
            return this.testIntCache;
        }
        set
        {
            if (this.testIntCache != value)
            {
                this.SetValue(TestIntProperty, value);
            }
        }
    }

    private static void OnTestIntChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        (d as TestControl).testIntCache = (int)e.NewValue;
    }
}
We add a handler that listens for changes to the TestInt property and caches the new value in a field; the getter then simply returns this cached field. When setting the value, we also check whether the new value equals the cached one -- if it does, the SetValue method isn't called. The results of repeating the earlier measurements with these simple changes are shown in Figure 4.
100,000 Iterations | CLR Properties | DependencyProperty
Set different values | 3 ms | 1062 ms
Set same value | 3 ms | 4 ms
Get value | 3 ms | 3 ms
Figure 4 DependencyProperty Get/Set Measurements Using the Class from Figure 3
Only five lines of additional code resulted in a significant optimization. This optimization comes at a price, however. You sacrifice memory footprint (adding four more bytes to the object’s size with this additional field) for the sake of performance. The user probably won’t notice a slightly larger memory footprint but would definitely be aware of slower performance. The developer is responsible for evaluating the impact of either approach. If you have many dependency properties within your classes and you create many instances of these classes, more bytes within a single object can become a problem. If you use DependencyProperty sparingly, you don’t need to cache its effective value in a field.
Note Be careful when adding a Changed handler for a property. It will force the underlying framework to synchronize the property value with the UI thread, which can downgrade performance for properties whose values are animated. Also keep in mind that the “if” check in the property setter won’t work with bindings because the framework internally uses the SetValue(TestIntProperty, value) method rather than the property setter.

Cache a DependencyProperty effective value outside a loop

The example in Figure 3 works because we have our own class that we can modify as desired. But what if we have to use a DependencyProperty from an external library and we don’t have access to its source? We can handle this with another simple yet efficient optimization. Consider the following code:
for (int i = 0; i < 100000; i++)
{
    if (this.ActualWidth == i) // "this" refers to a PhoneApplicationPage instance
    {
        // perform some action
    }
    else
    {
        // perform other action
    }
}
Do you see something that can be written more efficiently? Here is a slightly modified version of the preceding loop:
double actualWidth = this.ActualWidth; // "this" refers to a PhoneApplicationPage instance
for (int i = 0; i < 100000; i++)
{
    if (actualWidth == i)
    {
        // perform some action
    }
    else
    {
        // perform other action
    }
}
With this modified approach, we look up the value of the property only once and then use the value, cached in a local variable, to perform the “if” clause. The optimized results are shown in Figure 5.
100,000 Iterations | Loop Time Elapsed
Before optimization | 750 ms
After optimization | 4 ms
Figure 5 Comparison of Loop Performance
Pretty impressive! This optimization is valid only if the ActualWidth value isn’t changed during the loop. If there’s a condition that could change this value, you need to update the variable upon the change or look it up every time if you don’t know when the change will occur.
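If the value can change while the loop is still running, one option is to refresh the cached copy from an event handler. Here is a minimal sketch, assuming the code lives in a page class (such as the PhoneApplicationPage mentioned above); the MainPage constructor and the DoWork method are hypothetical names used only for illustration:
private double cachedActualWidth;

public MainPage() // hypothetical page constructor
{
    InitializeComponent();
    this.cachedActualWidth = this.ActualWidth;
    // Keep the cached copy in sync whenever the layout system resizes the page
    this.SizeChanged += (sender, args) => this.cachedActualWidth = args.NewSize.Width;
}

private void DoWork()
{
    for (int i = 0; i < 100000; i++)
    {
        if (this.cachedActualWidth == i) // read the field, not the dependency property
        {
            // perform some action
        }
        else
        {
            // perform other action
        }
    }
}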

Some final thoughts about DependencyProperty caching

Dependency properties are great, and they help make XAML the powerful framework it is. But don't forget that in some cases they can degrade performance significantly. Always keep in mind the overhead the dependency property system brings to setting/getting a value and use the preceding tricks when appropriate. Estimate which properties need to be dependency properties and which can be simple CLR properties. For example, if you have a getter-only property, you don't need to register a DependencyProperty for it; a plain CLR property that raises the PropertyChanged notification is enough to enable bindings.
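As a minimal sketch of that last point (the class and property names here are invented for illustration), a read-only value can stay a plain CLR property on a type that implements INotifyPropertyChanged and still participate in bindings:
// Requires System.ComponentModel
public class StatusViewModel : INotifyPropertyChanged
{
    private string status;

    public event PropertyChangedEventHandler PropertyChanged;

    // A plain CLR property -- no DependencyProperty registration needed for a getter-only value
    public string Status
    {
        get { return this.status; }
    }

    protected void UpdateStatus(string newStatus)
    {
        if (this.status != newStatus)
        {
            this.status = newStatus;
            var handler = this.PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs("Status"));
            }
        }
    }
}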

Efficient Looping With -- and Without -- LINQ

Since LINQ was first released in 2007 as part of .NET Framework 3.5, developers seldom write their own loops anymore, relying on LINQ instead. LINQ is powerful, and its beauty is that it can be executed against different providers, such as Microsoft SQL Server, in-memory objects, XML or even your own custom provider implementing the IQueryable interface. We love LINQ—its framework libraries often spare us from writing many lines of code. Sometimes, however, we still have to write our own loops for the sake of performance. By properly estimating our algorithm complexity, we can recognize whether we can use LINQ or need our own loops.
For example, we can write code to solve a simple problem. Let’s say we need to find the minimum, maximum and average of an array of integers. As shown in Figure 6, using LINQ makes it simple and clean.
Figure 6 Finding Min/Max/Average Using LINQ
private double[] FindMinMaxAverage(List<int> items)
{
    return new double[] { items.Min(), items.Max(), items.Average() };
}
Using our own loop, the code would look like Figure 7. (Yes, there is much more code.)
Figure 7 Finding Min/Max/Average of a Sequence Using a Custom Loop
private double[] FindMinMaxAverage(List<int> items)
{
    if (items.Count == 0)
    {
        return new double[] { 0d, 0d, 0d };
        // we may throw an exception if appropriate
        // throw new ArgumentException("items array is empty");
    }
    double min = items[0];
    double max = min;
    double sum = min;
    for (int i = 1; i < items.Count; i++)
    {
        if (items[i] < min)
        {
            min = items[i];
        }
        else if (items[i] > max)
        {
            max = items[i];
        }
        sum += items[i];
    }
    return new double[] { min, max, sum / items.Count };
}
When you compare the two routines, you can see the time advantage of our loop over LINQ, as shown in Figure 8.
Method | Time Elapsed
LINQ | 60 ms
Loop | 20 ms
Figure 8 Execution Time of Methods in Figure 6 and Figure 7
Are you surprised by the results? The LINQ implementation is highly efficient, and we couldn't write a better performing loop to find the minimum of a sequence. But finding the minimum, maximum and average requires the LINQ approach to loop through the entire sequence three times. So the loop we wrote is three times faster because of the difference in complexity -- the complexity of our loop is O (n) while the complexity of the method that uses LINQ is O (3n).

To LINQ or Not To LINQ

Figure 6 is a simple example that demonstrates the importance of complexity estimation in writing efficient code. We should always evaluate different solutions, analyze their pros and cons and decide which one to use in a particular context.
For example, if this method is used once or twice in the application, we obviously don’t need to write more code. But if it is used extensively, for example, in a charting engine, we should definitely opt for the second solution because it improves performance 200% per single method call.
A LINQ extension method generally has O (n) or O (nlogn) complexity, depending on the method. It also has some optimizations. For example, the Count method checks whether the sequence is an ICollection; if it is, it returns the collection's Count property directly, making the complexity constant -- O (1). And because of LINQ's deferred (lazy) execution, queries can be merged, reducing the complexity.
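To picture that ICollection short-circuit, here is a simplified sketch of the idea -- this is not the actual Enumerable.Count source, and SequenceHelpers/CountSketch are made-up names:
// Requires System.Collections.Generic
static class SequenceHelpers
{
    public static int CountSketch<T>(IEnumerable<T> source)
    {
        ICollection<T> collection = source as ICollection<T>;
        if (collection != null)
        {
            return collection.Count; // O (1): the collection already knows its size
        }
        int count = 0;
        foreach (T item in source) // O (n): fall back to walking the sequence
        {
            count++;
        }
        return count;
    }
}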
OrderBy uses the same custom QuickSort implementation as in the List<T>.Sort method. We tried our own GroupBy implementation and gained only several milliseconds compared to LINQ. In this case, why would we bother writing our own grouping algorithm when LINQ already implements one efficiently?  You always need to measure the execution time of the methods you write, estimate how often the methods will be used and figure out whether you can write a better implementation.
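A simple way to take such measurements is System.Diagnostics.Stopwatch; the sketch below times the FindMinMaxAverage method from Figure 6 or Figure 7 against the same items list:
// Requires System.Diagnostics; "items" is the List<int> passed to the methods above
Stopwatch watch = Stopwatch.StartNew();
double[] result = FindMinMaxAverage(items);
watch.Stop();
Debug.WriteLine("FindMinMaxAverage took {0} ms", watch.ElapsedMilliseconds);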
A more complicated example with some XAML code illustrates where using LINQ adds enormous complexity. Consider the code in Figure 9.
Figure 9 Sample Code with Extensive LINQ Usage
private void UpdateRelatedEntries(List<LinqViewModel> items)
{
    foreach (LinqViewModel item in items)
    {
        item.Related = items.Where(s => s != item)
                            .OrderBy(s => s.Score).Reverse()
                            .Take(5)
                            .ToList();
    }
}
Can you guess the complexity of this “simple” code? Let’s analyze it:
foreach -> O (n)
  • Items.Where(s => s != item) -> O (n - 1)
  • OrderBy(s => s.Score) -> O ((n – 1)log(n – 1)) (quicksort)
  • Reverse() -> constant, will be merged with Take
  • Take(5) -> constant
  • ToList() -> constant since Array.Copy is used internally and only the time for memory allocation is spent
The overall complexity is O (n * ((n – 1) + (n – 1) * log (n – 1))). This is worse than quadratic complexity, which is unacceptable and can be a showstopper in your application.
Now imagine adding a Select clause with some additional LINQ methods within its body. The complexity would be nearly cubic. Measuring the execution time of this method with our 100,000 entries results in a mind-blowing five-minute (and still running) freeze on our Omnia 7 screen. Some of you might be thinking that it isn't realistic to experiment with 100,000 entries on a mobile device. And you're right -- that's a lot of data even for a desktop app. But the point remains that the complexity of this algorithm makes execution time grow far faster than the number of items.
Figure 10 shows the results from tests using 100, 1000 and 5000 items -- numbers more realistic for a mobile app.
Items | Time Elapsed
100 | 45 ms
1000 | 4222 ms
5000 | 138829 ms (~2.5 minutes)
Figure 10 Execution Times for the Method in Figure 9

Optimizing the loop

The algorithm definitely needs to be improved. For starters, why do we need to perform an expensive sort operation on each iteration? We could just sort the items in descending order once, outside the loop. As shown in Figure 11, this approach removes the inner OrderBy and Reverse calls.
Figure 11 Optimization of the Method in Figure 9
private void UpdateRelatedEntries(List<LinqViewModel> items)
{
    List<LinqViewModel> sortedDescendantItems = items.OrderBy(item => item.Score).Reverse().ToList();
    foreach (LinqViewModel item in items)
    {
        item.Related = sortedDescendantItems.Where(s => s != item)
                            // .OrderBy(s => s.Score).Reverse()
                            .Take(5)
                            .ToList();
    }
}
You might be wondering why we copy the result of the query into a list. The reason is a bit tricky. Because LINQ uses deferred (lazy) execution, the quick sort behind the OrderBy call is performed when iteration starts. If the result isn't copied to a list (which makes the query iterate once), the quick sort is performed on each enumeration, even though the query is defined outside the loop. Another consequence of deferred execution is that the Take(5) and Where(…) clauses will be merged, making the overall execution time inside the loop constant.
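The effect is easy to see outside the loop as well. In this small sketch (using the same items list), enumerating the un-materialized query twice runs the sort twice, while the list produced by ToList is sorted only once:
// Deferred execution: the sort runs every time the query is enumerated
IEnumerable<LinqViewModel> query = items.OrderBy(item => item.Score).Reverse();
int firstPass = query.Count();  // enumerates -- sorts once
int secondPass = query.Count(); // enumerates again -- sorts again

// Materialized: the sort runs once, when ToList() enumerates the query
List<LinqViewModel> sorted = items.OrderBy(item => item.Score).Reverse().ToList();
int cheapPass = sorted.Count;   // just reads the stored count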
Figure 12 shows what happens when we do the measurements with this optimization.
Items | Time Elapsed
100 | 13 ms
1000 | 29 ms
5000 | 131 ms
Figure 12 Execution Times of the Method from Figure 11
That's much better -- from worse-than-quadratic complexity, we're down to roughly O (nlogn). This is considered good in computer science. Still, if this were our code, we wouldn't use LINQ in the loop's body. Instead, we would write our own inner loop, add the first five items and then break the loop, as shown in Figure 13.
Figure 13 The Method from Figure 9 Implemented Using our Own Loops
private void UpdateRelatedEntries(List<LinqViewModel> items)
{
    List<LinqViewModel> sortedDescendantItems = items.OrderBy(item => item.Score).Reverse().ToList();
    for (int i = 0; i < items.Count; i++)
    {
        LinqViewModel item = items[i];
        List<LinqViewModel> relatedItems = new List<LinqViewModel>(8);
        for (int j = 0; j < sortedDescendantItems.Count; j++)
        {
            if (sortedDescendantItems[j] == item)
            {
                continue;
            }
            relatedItems.Add(sortedDescendantItems[j]);
            if (relatedItems.Count == 5)
            {
                break;
            }
        }
        item.Related = relatedItems;
    }
}
Time for measurements again. The results are shown in Figure 14.
Items | Time Elapsed
100 | 6 ms
1000 | 13 ms
5000 | 57 ms
Figure 14 Execution Times of the Method from Figure 13
Ah, the good old-fashioned loop -- nothing is faster. If you prefer LINQ and plan to use the method from Figure 9 only rarely, you’ll be fine. If you plan to use it extensively, however, you’re better off writing your own loop, which will complete its work in half the time.

Some final thoughts on looping with LINQ

LINQ is efficient, saves you from writing a lot of code, and is neat, clean and easily read. It also allows you to execute queries against different providers. But as demonstrated in Figure 9, it can also add undesired complexity and degrade performance. You’ll get into trouble if you think of LINQ as a single method call with constant execution time rather than the shortcut to different algorithms that it is. The complexity involved in looping is what matters. In some cases, you still need to write your own loops instead of relying on LINQ.

Working with the XAML Layout System

To create the page layout, the XAML layout system measures and then arranges each container. During the measure pass, each container recursively calls its children and asks for their desired sizes. Then all the elements are arranged within the available rectangle. This process happens through the corresponding methods MeasureOverride and ArrangeOverride.
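As a rough illustration of how those two methods cooperate, here is a minimal, hypothetical panel that simply stacks its children vertically -- a sketch only, not one of the built-in panels discussed next:
public class SimpleStackPanel : Panel // made-up example panel
{
    protected override Size MeasureOverride(Size availableSize)
    {
        double width = 0, height = 0;
        foreach (UIElement child in this.Children)
        {
            // Ask each child how much space it wants, given unlimited height
            child.Measure(new Size(availableSize.Width, double.PositiveInfinity));
            width = Math.Max(width, child.DesiredSize.Width);
            height += child.DesiredSize.Height;
        }
        return new Size(width, height); // the panel's own desired size
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        double top = 0;
        foreach (UIElement child in this.Children)
        {
            // Place each child below the previous one in the final rectangle
            child.Arrange(new Rect(0, top, finalSize.Width, child.DesiredSize.Height));
            top += child.DesiredSize.Height;
        }
        return finalSize;
    }
}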
Before going further, let's look at the main layout panels, their capabilities and their implementation. Canvas, StackPanel and Grid are some of the layout panels you'll be working with most often. Let's also consider virtualization; we'll show you how to use one of its applications, VirtualizingStackPanel.
Canvas
Canvas is used to arrange items in one panel based on absolute offsets. Canvas performs better than Grid and StackPanel because each child's measure doesn't rely on the desired size of the other children. In most cases, however, the differences among Canvas, Grid and StackPanel are relatively small, since each measures all of its children. This control is suitable when its children rely only on absolute positions (e.g., a simple designer/drawing tool); by default, children don't rely on each other for their position. If the scenario requires children to depend on each other for their size, with translate transforms applied to mimic a grid- or list-like layout, you should consider other layout panels or a custom implementation.
StackPanel
StackPanel is used to arrange child elements into a line that can be oriented horizontally or vertically. It provides functionality common to lists. StackPanel measures all items even if some of them aren't visible. If you need the panel of an ItemsControl to display many elements, consider using a virtualized panel instead of StackPanel.
Grid
Grid is a panel used to arrange elements in a grid-like layout with both column and row definitions. Compared to Canvas and StackPanel, Grid has a heavier measure cycle. It is the default panel used for new UserControls and Windows.
Virtualization (VirtualizingStackPanel)
In most scenarios where performance is an issue, panels are used within ItemsControl for realizing multiple items. In these situations, the best approach is to provide a panel that supports virtualization, such as VirtualizingStackPanel. With this approach, the elements realized in the visual tree are limited to the items currently visible, cutting the time for measure/arrange.

Comparing the panels

Let's create a realistic scenario in which the panels are used as the container for an ItemsControl such as ListBox, and review the results for the initial load. The measurements show the duration of the layout cycle (Arrange + Measure). The code involved will be similar to that shown in Figure 15.
Figure 15 Code for Canvas, StackPanel, Grid and VirtualizingStackPanel
Canvas
<UserControl.Resources>
       <Style TargetType="ListBoxItem">
              <Setter Property="Canvas.Left" Value="10"></Setter>
       </Style>
</UserControl.Resources>
<ListBox  ItemsSource="{Binding Data}">
       <ListBox.ItemsPanel>
              <ItemsPanelTemplate>
                    <Canvas>
                    </Canvas>
              </ItemsPanelTemplate>
       </ListBox.ItemsPanel>
</ListBox>
StackPanel
<ListBox  ItemsSource="{Binding Data}">
       <ListBox.ItemsPanel>
              <ItemsPanelTemplate>
                    <StackPanel>
                    </StackPanel>
              </ItemsPanelTemplate>
       </ListBox.ItemsPanel>
</ListBox>
Grid
<UserControl.Resources>
       <Style TargetType="ListBoxItem">
              <Setter Property="Grid.Column" Value="1"></Setter>
       </Style>
</UserControl.Resources>
<ListBox  ItemsSource="{Binding Data}">
       <ListBox.ItemsPanel>
              <ItemsPanelTemplate>
                    <Grid>
                           <Grid.ColumnDefinitions>
                                  <ColumnDefinition></ColumnDefinition>
                                  <ColumnDefinition></ColumnDefinition>
                           </Grid.ColumnDefinitions>
                    </Grid>
              </ItemsPanelTemplate>
       </ListBox.ItemsPanel>
</ListBox>
VirtualizingStackPanel
<ListBox  ItemsSource="{Binding Data}">
       <ListBox.ItemsPanel>
              <ItemsPanelTemplate>
                    <VirtualizingStackPanel>
                    </VirtualizingStackPanel>
              </ItemsPanelTemplate>
       </ListBox.ItemsPanel>
</ListBox>
The results (in seconds) are shown in Figure 16.
Panel Type | 100 Items | 1000 Items | 5000 Items
Canvas | 0.725 | 4.7 | 28.95
StackPanel | 0.7645 | 4.681333 | 29.597
Grid | 0.737 | 4.8125 | 29.694
VirtualizingStackPanel | 0.596 | 0.567667 | 0.5965
Figure 16 Execution Times of Panels in Figure 15
As you can see, UI virtualization can greatly increase the performance. The logic involved in handling a custom layout is insignificant when compared to the overhead of measuring down the entire visual tree.

General suggestions for custom panels

When the default panels don’t meet our performance or behavior needs, we often create our own custom panels. Here are some suggestions to consider when creating custom panels:
  • Use virtualization when the scenario allows because it can drastically reduce load/response time.
  • Use InvalidateMeasure wisely. It triggers a layout update to the element and all its children. It also triggers both Measure and Arrange cycles.
  • When implementing custom panels, be careful using layout-specific properties, such as ActualWidth, ActualHeight, Visibility and so on. They can cause a LayoutCycleException.
  • When displaying hierarchical UI elements, consider arranging the items in one container. Using nested UI Elements/Panels increases the measure cycle time because unmanaged code is called in each nested level.

Conclusion

As you have seen, taking a closer look at the code you’re using in your XAML applications can help you make some changes to your usual coding processes that can enhance performance throughout your applications. If you understand the complexity of the dependency property system, you can optimize your code for faster retrievals. If you know exactly how LINQ uses collections, you can make decisions that lead to faster and more efficient loops. Finally, if you’re aware of how the layout system operates and how to optimize custom controls when you need them, you can create more responsive XAML applications.

Build Mobile-Friendly HTML5 Forms with ASP.NET MVC 4 and jQuery Mobile

Rachel Appel
Last month in my MSDN Magazine Web column, I covered how to get started with the latest tools for Microsoft Web development: HTML5, jQuery Mobile and ASP.NET MVC 4. In this issue, I’ll explain how to create mobile-friendly HTML5 forms in ASP.NET MVC 4 projects that also use jQuery Mobile.

Mobilized Web Project Templates in Visual Studio 2010

The MVC 4 Mobile project template in Visual Studio 2010 contains all the files and references necessary to create a mobile-friendly Web site. When you create a new MVC 4 Mobile project, you’ll notice the familiar Models, Views and Controllers folders requisite for all MVC 4 projects (mobile or not) alongside new or modified scripts in the \Scripts folder. The \Scripts folder is where you’ll find the many jQuery files that serve as an API for building mobile-friendly Web sites, in particular, the jquery.mobile-1.0b2.js file for development and its minified partner, jquery.mobile-1.0b2.min.js, for deployment.
The \Content folder contains the location for style sheets, images and design-related files. Keep in mind that the jquery.mobile-1.0b2.css style sheet defines a look and feel that specifically targets multiple mobile platforms. (See http://jquerymobile.com/gbs/ for a list of supported mobile and tablet platforms.) Much like JavaScript files, there are two style sheets: a fat version for development and a minified version for production.

Data Sources for HTML5 Forms: MVC 4 Models and ViewModels

Regardless of whether the target is mobile or desktop, HTML5 form elements map to a property of an entity in a model or a ViewModel. Because models expose varied data and data types, their representation in the user interface requires varied visual elements, such as text boxes, drop-down lists, check boxes and buttons. You can see the full set of available controls or elements at the jQuery Mobile Web site’s Form Element Gallery.
Simple forms that contain only text inputs and buttons are not the norm. Most forms have several types of data. Because of this data variety, coding and maintenance will be easier if you use a ViewModel. ViewModels are a combination of one or more types that together shape data that goes to the view for consumption and rendering.
Let’s say you want to build a quick way for users of your Web site to provide feedback. You need to collect the user’s name, the type of feedback the user wants to leave, the comment itself, and the priority of the comment—that is, whether or not it’s urgent. Figure 1 shows how the FeedbackModel class definition captures these features in simple data structures such as strings, an int, and a Boolean.
public class FeedbackModel
{
    public string CustomerName { get; set; }
    public int FeedbackType { get; set; }
    public string Message { get; set; }
    public bool IsUrgent { get; set; }
}
Figure 1 Feedback Model
The FeedbackType property in Figure 1 is of type int, and it corresponds to the value the user selects at run time in the feedback type drop-down list defined in Figure 3.
Figure 2 contains the definition for the FeedbackViewModel, which is a combination of the FeedbackModel described in Figure 1 and the FeedbackType class (described in Figure 3).
public class FeedbackViewModel
{
    public FeedbackModel Feedback { get; set; }
    public FeedbackType FeedbackType { get; set; }

    public FeedbackViewModel()
    {
        Feedback = new FeedbackModel();
        FeedbackType = new FeedbackType();
    }
}
Figure 2 Feedback ViewModel Containing the FeedbackModel and FeedbackType Properties
The use of the FeedbackType property highlights the purpose of ViewModels, which, as I mentioned earlier, is to shape disparate data sources or models into a single source that the view can consume using strongly typed syntax.
While you can represent most of the data in a simple ViewModel as text boxes or check boxes, you also need to capture the type of feedback, which is a list of name-value pairs exposed in code as a more complex dictionary object. Figure 3 shows the FeedbackType class and the dictionary contained within it.
public class FeedbackType
{
    public static SelectList FeedbackSelectList
    {
        get { return new SelectList(FeedbackDictionary, "Value", "Key"); }
    }

    public static readonly IDictionary<string, int>
        FeedbackDictionary = new Dictionary<string, int>
    {
        { "Select the type ...", 0 },
        { "Leave a compliment", 1 },
        { "Leave a complaint", 2 },
        { "Leave some SPAM", 3 },
        { "Other", 9 }
    };
}
Figure 3 FeedbackType Class, Including User Feedback Types
Now that the ViewModel is complete, the controller must pass it to the view for rendering. This straightforward code is in Figure 4 and is virtually identical to code that passes back a model.
public ActionResult Feedback()
{
    FeedbackViewModel model = new FeedbackViewModel();
    return View(model);
}
Figure 4 Controller Passing the ViewModel to the View
The next step in the process is setting up the view.

Creating HTML5 Mobile Forms in ASP.NET MVC 4 Views

You use the standard Add New Item command in Visual Studio 2010 to create feedback.cshtml, the view that will host your HTML5 form. ASP.NET MVC 4 favors a development technique named convention over configuration, and the convention is to match the name of the action method (Feedback) in the controller in Figure 4 with the name of the view, that is, feedback.cshtml. You can find the Add New Item command from the shortcut menu in Solution Explorer or the Project menu.
Inside the view, various ASP.NET MVC 4 Html Helpers present components of the FeedbackViewModel by rendering HTML elements that best fit the data types they map to in the ViewModel. For example, CustomerName renders as a standard single-line text box, while the Message property renders as a text area. FeedbackType renders as an HTML drop-down list so that the user can easily select an item rather than manually enter it. Figure 5 shows that there is no lack of Html Helpers to choose from for building forms.
@using (Html.BeginForm("Results", "Home")) {
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Leave some feedback!</legend>
        <div class="editor-label">
            @Html.LabelFor(model => model.Feedback.CustomerName)
        </div>
        <div class="editor-field">
            @Html.TextBoxFor(model => model.Feedback.CustomerName)
            @Html.ValidationMessageFor(model => model.Feedback.CustomerName)
        </div>
        <div class="editor-label">
            @Html.LabelFor(model => model.Feedback.FeedbackType)
        </div>
        <div class="editor-field">
            @Html.DropDownListFor(model => model.Feedback.FeedbackType,
                 FeedbackType.FeedbackSelectList)
            @Html.ValidationMessageFor(model => model.Feedback.FeedbackType)
        </div>
        <div class="editor-label">
            @Html.LabelFor(model => model.Feedback.Message)
        </div>
        <div class="editor-field">
            @Html.TextAreaFor(model => model.Feedback.Message)
            @Html.ValidationMessageFor(model => model.Feedback.Message)
        </div>
        <div class="editor-label">
            @Html.LabelFor(model => model.Feedback.IsUrgent)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.Feedback.IsUrgent)
            @Html.ValidationMessageFor(model => model.Feedback.IsUrgent)
        </div>
        <p>
            <input type="submit" value="Save" />
        </p>
    </fieldset>
}
Figure 5 Html Helpers
With the ViewModel, controller and view, the form is now ready to test in the browser.

Testing the HTML Form on the Windows Phone 7 Emulator

Running a browser from Visual Studio is the easiest way to test the form, but the look and feel doesn’t behave in a very mobile-like way. For viewing the output and testing the form, the Windows Phone 7 Emulator works perfectly.
The HTML5 form displays in the Windows Phone 7 Emulator, as shown in Figure 6. You can enter a name, select a type from the drop-down list, fill in the comments and submit the form. Without modifications to the default styling provided by jQuery Mobile style sheets, the overall HTML5 form looks like the image on the left side of Figure 6. After tapping on the drop-down, the list of items looks like the image on the right side of Figure 6. Tapping a list item to select it returns the user to the form.
Figure 6 Interacting with the Windows Phone 7 Emulator
Submitting the form directs the browser to send the form information to the Home controller because of the call to the Html Helper, Html.BeginForm("Results", "Home"). The BeginForm method directs the HTTP request to the HomeController controller and then runs the Results action method, as the arguments denote.
Before the form submission process sends the HTTP Request to the server, however, client-side validation needs to happen. Annotating the data model accomplishes this task nicely. In addition to validation, data annotations provide a way for the Html.Label and Html.LabelFor helpers to produce customized property labels. Figure 7 details the entire data model with attributes for both validation and aesthetic annotations, and Figure 8 illustrates their results in the Windows Phone 7 Emulator.
public class FeedbackModel
{
    [Display(Name = "Who are you?")]
    [Required()]
    public string CustomerName { get; set; }
    [Display(Name = "Your feedback is about...")]
    public int FeedbackType { get; set; }
    [Display(Name = "Leave your message!")]
    [Required()]
    public string Message { get; set; }
    [Display(Name = "Is this urgent?")]
    public bool IsUrgent { get; set; }
}
Figure 7 Complete Data Model with Annotations
Figure 8 Left: Data Annotation Validations; Right: Data Annotation Aesthetics
You can customize the error message of the Required attribute to make the user interface friendlier. There are also many more annotations available in the System.ComponentModel.DataAnnotations namespace. If you can't find a data annotation that fits your validation, aesthetic or security needs, inheriting from the System.Attribute class and extending it gives you that flexibility.
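As a hedged sketch of that last point (the attribute name and rule below are invented for illustration), validation attributes typically derive from ValidationAttribute, which itself extends System.Attribute:
// Requires System.ComponentModel.DataAnnotations; NoSpamAttribute is a made-up example
public class NoSpamAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        string text = value as string;
        // Reject messages that mention SPAM; null or non-string values pass through
        return text == null || !text.ToUpperInvariant().Contains("SPAM");
    }
}
It could then be applied to the Message property in Figure 7 alongside the other annotations, for example [NoSpam(ErrorMessage = "No SPAM, please!")].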

From the Phone to the Server Through HTTP POST

Once the user taps the submit button on the phone—and assuming the form passes validation—an HTTP POST Request is initiated and the data travels to the controller and action method designated in the Html.BeginForm method (as was shown in Figure 5). The sample from Figure 9 shows the controller code that lives in the HomeController and processes the data that the HTTP Request sends. Because of the power of ASP.NET MVC 4 model binding, you can access the HTML form values with the same strongly typed object used to create the form – your ViewModel.
[HttpPost()]
public ActionResult Results(FeedbackViewModel model)
{
    // calls to code to update model, validation, LOB code, etc...
    return View(model);
}
Figure 9 Capturing the HTTP POST Data in the Controller
When capturing HTTP POST data, attributes once again assist in the task: action methods that have no attribute stating the HTTP verb default to HTTP GET, which is why the Results action in Figure 9 is decorated with [HttpPost()].
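For instance, the Feedback action in Figure 4 carries no verb attribute and therefore answers GET requests. If you wanted a GET counterpart for Results as well, a hypothetical unattributed overload like the following (not part of the original sample) would be treated as GET, while the [HttpPost()] overload in Figure 9 keeps handling the form submission:
// No verb attribute, so this overload answers HTTP GET (for example, a page refresh)
public ActionResult Results()
{
    return View(new FeedbackViewModel());
}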

Conclusion

Creating shiny new forms for mobile devices as well as desktops has never been easier with the partnership between ASP.NET MVC 4, jQuery Mobile and HTML5.
Next month we dig deeper into this example by collecting the feedback data and saving it back to a database using Entity Framework.

5-Minute Tour of CSS3 Background Gradients

By John Papa

Designing the presentation layer of an HTML5 application requires some familiarity with Cascading Style Sheets (CSS). You might want to set the background of an element or set of elements using the latest standard definition of the language, CSS3. The CSS3 background property sets the background for an element, but some of its features behave differently based on the CSS parsing by various browsers' rendering engines. This is where vendor prefixes and a little knowledge of how to apply them can be very handy. First I'll demonstrate how the CSS background property works; then introduce some tools you can use for gradients and show you how to make them work across different browsers. You can find the complete source code for this sample here.

Simple Scenario
Figure 1 shows a simple Timer in Microsoft Internet Explorer 9. The numbers should have a background that gives them a little bit of depth. In this example, the background image is 1px wide and 72px tall.
Figure 1. Background image in Microsoft Internet Explorer 9.
By default a background is placed at the top-left corner of an element. The background is repeated both vertically and horizontally over the entire space the element covers.
Using an image is perhaps the easiest and most supported way to set the background. The syntax is quite simple, as shown in the bg CSS class here:
.bg
{
  background: url(/images/bg.png); 
}
This approach is the recommended way to use backgrounds in Internet Explorer because it doesn't support gradients (unless you're using Internet Explorer 10).
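If the default tiling described earlier isn't what you want, the same shorthand can also carry a repeat mode and a position. Here is a small variation on the bg class -- not part of the original sample -- that repeats the strip only horizontally from the top-left corner:
.bg
{
  /* tile the image only horizontally, anchored to the top-left corner */
  background: url(/images/bg.png) repeat-x top left;
}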
Gradient Fallback Color
However, if you don't have a background image prepared, or you need the background to be flexible enough to change later, you could use gradients instead. With a gradient, the browser doesn't have to request and download the background image -- not a huge file, but still one more thing to download.
The catch with gradients is the major browsers all implement them differently.
It's important to first create a fallback color for the gradient for that "just in case" scenario. You might have missed the proper syntax that a specific browser requires, or the browser might not support gradients at all. In the following CSS example, the background color is set first as a fallback. Then if the browser supports the gradients and it finds the valid CSS syntax, it will use the gradient instead:
.bg
{
  /* fallback color */
  background-color: #000000;

  /* begin listing gradients */
  
}
Creating a Gradient in Chrome and Safari
There are a few ways you can write vendor-specific prefixes to handle gradients. One common way is to simply hand-type them. Another approach is to find a Web page that generates the CSS.
For example, take a look at the syntax to define a gradient using the latest version of the Google Chrome browser. Chrome looks for the vendor prefix for WebKit, in this case -webkit-gradient. You can then define the type of gradient, the coordinates for the angle of the gradient and any stop points to change the color:
/* chrome gradient syntax */
background: -webkit-gradient(
  type,
  x y,
  x y,
  color-stop(percentage, color),
  color-stop(percentage, color),
  color-stop(percentage, color)
  );
You can create a gradient from bottom to top, using a shade of blue for the background of the numbers with multiple color stops.
The following CSS declares that browsers supporting WebKit should use a linear gradient starting from the left bottom and angling to the left top. Effectively, this paints the gradient from the bottom to the top, which is what you want. Next, a series of color stops are defined. In this example, each color stop indicates the position in a percentage between 0.00 and 1.00 and the color. The first color at the bottom is defined by the stop at 0, which indicates that the start color is black using rgb(0,0,0). The next color stop is defined at 46 percent of the way up from the bottom and uses a shade of blue with rgb(11, 111, 211). The color will gradually change from black to blue between the bottom and the stop at 46 percent:
background: -webkit-gradient(
  linear,
  left bottom,
  left top,
  color-stop(0, rgb(0, 0, 0)),
  color-stop(0.46, rgb(11, 111, 211)),
  color-stop(0.46, rgb(0, 0, 0)),
  color-stop(0.50, rgb(0, 0, 0)),
  color-stop(0.50, rgb(11, 111, 211)),
  color-stop(1, rgb(0, 0, 0))
  );
The next color is also at 46 percent and goes back to black. Because there are two stops at the same point, this effectively tells the browser to switch colors immediately at that stopping point. The next color is at 50 percent. It's also black, which creates a thin, solid black line in the middle. Finally, there are two more color stops in which the color will gradually change from blue to black again from 50 percent all the way to the top at 100 percent. The net result is shown in Figure 2, which is using Chrome (but the CSS example also works in the WebKit-based Safari browser).
Figure 2. Gradient in Google Chrome 16.  
Creating a Gradient in Firefox
The latest versions of Mozilla Firefox also support gradients, but using a slightly different syntax. WebKit expects the type of gradient (for example, linear) to be indicated as a parameter while Mozilla requires the type to be defined in the vendor prefix:
background: -moz-linear-gradient(
  angle location,
  color percentage,
  color percentage,
  color percentage
  );
The angle and location don't indicate a starting and ending point, which was the case with WebKit. Instead, they simply indicate the angle (such as center) and where to start (bottom). Finally, multiple color stops are supported using colors and percentages (instead of decimals):

background: -moz-linear-gradient(
  center bottom,
  rgb(0, 0, 0) 0%,
  rgb(211, 111, 111) 46%,
  rgb(0, 0, 0) 46%,
  rgb(0, 0, 0) 50%,
  rgb(211, 111, 111) 50%,
  rgb(0, 0, 0) 100%
  );
This CSS example shows the exact same gradient as the previous example for WebKit, except the colors. It creates a red and black gradient for the background in Firefox, as shown in Figure 3.
Figure 3. Gradient in Mozilla Firefox 9
Putting It Together
When you put all of this together, the CSS class might look like the code in Listing 1.
Notice the fallback color is indicated first, so if the user's browser doesn't support any of the other options, then the background color will default to black with no gradient. Next, the background image is set, which will be used by browsers that don't support gradients. Finally, WebKit and Mozilla browsers will use the gradient syntax. So what you end up with is Internet Explorer 9 rendering the background image, Chrome and Safari rendering a blue/black gradient, and Firefox rendering a red/black gradient. I used different colors here simply to point out the differences when you run them in the browsers. I recommend that you use similar colors in your renderings in real-world apps.
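Listing 1 isn't reproduced here, but assembling the snippets shown earlier (fallback color, background image, WebKit gradient, Mozilla gradient, in that order) gives a sketch of what the combined class might look like:
.bg
{
  /* 1. fallback color for browsers that support neither the image nor gradients */
  background-color: #000000;

  /* 2. background image for browsers without gradient support (e.g., Internet Explorer 9) */
  background: url(/images/bg.png);

  /* 3. WebKit (Chrome/Safari) gradient */
  background: -webkit-gradient(
    linear,
    left bottom,
    left top,
    color-stop(0, rgb(0, 0, 0)),
    color-stop(0.46, rgb(11, 111, 211)),
    color-stop(0.46, rgb(0, 0, 0)),
    color-stop(0.50, rgb(0, 0, 0)),
    color-stop(0.50, rgb(11, 111, 211)),
    color-stop(1, rgb(0, 0, 0)));

  /* 4. Mozilla (Firefox) gradient */
  background: -moz-linear-gradient(
    center bottom,
    rgb(0, 0, 0) 0%,
    rgb(211, 111, 111) 46%,
    rgb(0, 0, 0) 46%,
    rgb(0, 0, 0) 50%,
    rgb(211, 111, 111) 50%,
    rgb(0, 0, 0) 100%);
}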
What Else?
Tools can be very helpful in creating gradients. Several Web sites help you create gradients and even generate the vendor-specific syntax for the CSS. The following two sites are a good place to start:
Both of these Web sites generate the CSS for the different browsers I discussed. They also generate the syntax for the Internet Explorer 10 beta, which supports gradients.
As a side note, Visual Studio extensions such as Web Essentials will help you generate vendor-specific prefixes. If you've already defined some of the prefixes, Web Essentials will skip those and generate any missing ones. This extension does not support CSS3 background gradients, but the creator (Mads Kristensen) tells me he has it high on his list of upcoming features. Web Essentials does support CSS3 border-radius, among other standard definition features. It'll also fill in extensions for -moz, -webkit, -ms and -o. And that's Papa's perspective.