I set up a website with resource files in ASP.NET, MVC4 as it happens, though that makes little difference. This is because the site is bilingual: Welsh and English.
Post launch, the client wanted a content change which involved changing a resource file but, perhaps because of my implementation 'choices', the site did not dynamically pick up the .resx change. I thought, in my ignorance, that maybe I needed to force a recompile, so I changed the web.config. This didn't work either, so now I probably need to understand better what is going on. Hence this blog entry.
Here's how I have the resources configured (as per the best practice google found for me):
- Build Action: Embedded resource
- Copy to output directory: do not copy
- Custom tool: PublicResXFileCodeGenerator
The fact there is a custom tool here leads me still to think recompilation is needed. A little more about the tool is here and here.
After which I had another "doh!" moment and thought that I should simply copy the corresponding designer file as well, as this is where the property will be accessed and the IDE is handling this for me. In which case I will then also need to force recompilation.
But no, still not working. What have I missed? Ok, so checking the bin directory there is also an xml file containing data relating to the resources. Let's try that as well. Hmmm, nope. Forcing recompilation one more time. Still no joy.
Ok, so let's take a look with Reflector: the resources are part of the assembly. So let's grab the deployed assembly (which is of identical size) and see if I can access the resources directly within it; they should be the old versions. They are. So the resources are literally embedded in the assembly, but simply copying the related files up to the server and forcing a recompilation doesn't do all that is necessary to update the resource references, presumably because this additional custom tool is something Visual Studio integrates with but isn't part of the standard .NET compilation process? So I can just copy up the dll compiled locally. Which worked. Got there at last.
In my book this isn't ideal, however, and it does raise the issue that maintenance life becomes a little more complex when you move to resource files, at least with this configuration. When/ if I get a chance I might have a play with other configurations. In the meantime, others have looked at this already.
Bootstrap, other Javascript Tools/ Frameworks and Keeping Up With Appearances
I don't know about other developers but I have this 'technical development topics list' ... products/ technologies/ concepts I've come across in passing but don't seem to consistently manage to dig into. This is particularly an issue currently as ...
a) there is just so much "stuff" to learn about in the (Microsoft) web software development space. 10 years ago it was far easier to keep abreast of the main development technologies. Now it's not possible for one person to cover all the bases and specialisation is required, at least if you are going to dig into anything in any depth
b) closely related is the changing landscape of devices and development for those devices, which increases the aforementioned complexity still further; native vs cross platform/ device anyone?
Anyway, in an attempt to make some personal headway, I thought I'd try and blog about some of these topics, once a week say. A laudable idea, but let's see how this goes. Not well so far, as this post has been sitting here unfinished for a month!
The New Way
A few years ago the ASP.NET web dev picture was different in several ways, including that life for the developer was simpler. We had ASP.NET web forms with postbacks, server controls and associated viewstate. Those server controls gradually got better in terms of user experience. People complained about how the web forms approach didn't facilitate unit testing and about the clunkiness of the state management. Ajax became more popular and this didn't fit that well into the Web forms way. Similarly Javascript, particularly in the form of jQuery became more popular as processing moved to the client in the drive for the more responsive UX. RESTful services are becoming the order of the day.
The "cutting edge developer" called for a more testable framework with less 'plumbing' and more control. Now we have ASP.NET MVC (as well as Microsoft web pages) which seems a better fit with this new Javascript-centric world than web forms. Of course there is then the option of dumping Microsoft/ Visual Studio completely for client development, and there is increasingly the option to continue this at the server side with technologies such as node.js.
Back to the client. With increased use of Javascript and reduced use of Microsoft plumbing code has come a host of competing "frameworks" to supposedly make life easier for the developer. If, that is, you don't spend half your life trying to work out which framework you should be using for a given project. K. Scott Allen (check out his Pluralsight videos) was on DotNetRocks recently and one of his hopes for the year was that the web dev landscape would simplify. I agree ... if we could go a little way back to it being more obvious which frameworks to use, and when, whilst also maintaining the benefits of this "brave new world", that would surely be a happier situation. Scott rattled off a few technologies/ projects/ frameworks during that show, so let's very briefly cover those; I'll plan to return to cover more of them, and probably other new ones that have popped up in the interim, in subsequent posts. Oh, and these are from my notes from the show to follow up on, so I may have added one or two more than were originally stated! Some of the headline descriptions provided by the tools' sites are not very useful, but ...
- Knockout (http://knockoutjs.com/) - 'simplify dynamic JavaScript UIs by applying the MVVM pattern'
- Backbone (http://backbonejs.org/) - 'Backbone.js gives structure to web applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, views with declarative event handling, and connects it all to your existing API over a RESTful JSON interface.'
- Spine (http://spinejs.com/) - 'Build Awesome JavaScript MVC Applications' - useful overview description right there!
- Angular (http://angularjs.org/) - 'HTML enhanced for web apps!' - ditto.
- Masonry (http://masonry.desandro.com/) - 'A dynamic layout plugin for jQuery'
- Modernizr (http://modernizr.com/) - 'A JavaScript library that detects HTML5 and CSS3 features in the user’s browser'
- Bootstrap (http://twitter.github.com/bootstrap/) - 'Sleek, intuitive, and powerful front-end framework for faster and easier web development'
- CoffeeScript (http://coffeescript.org/) - 'CoffeeScript is a little language that compiles into JavaScript. Underneath all those awkward braces and semicolons, JavaScript has always had a gorgeous object model at its heart. CoffeeScript is an attempt to expose the good parts of JavaScript in a simple way. '
- Typescript (http://www.typescriptlang.org/) - 'TypeScript is a language for application-scale JavaScript development. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source. '
- Skeleton (http://www.getskeleton.com), which I used for my own website and elsewhere - 'A Beautiful Boilerplate for Responsive, Mobile-Friendly Development'. See also this tutorial I found useful.
- LESS (http://lesscss.org/) - 'LESS extends CSS with dynamic behavior such as variables, mixins, operations and functions.'
- Further, though perhaps a little different intended scope, Telerik's KendoUI has also caught my eye.
A little more on Bootstrap
So, lots of interesting playthings in the arena of client web technologies, likely to help (and confuse) us poor web developers. Let's have a closer look at Bootstrap. Of the above it is most similar to Skeleton, but whereas Skeleton specifically targets CSS support for a 12 column 960px grid system, with supporting media queries and a few extras in the form of consistent styling of buttons, forms and typography, the scope of Bootstrap is a little larger, seemingly a superset including:
- Scaffolding: global styles for the body to reset type and background, link styles, grid system, and two simple layouts.
- Base CSS: styles for common HTML elements like typography, code, tables, forms, and buttons. Also includes Glyphicons, a great little icon set.
- Components: basic styles for common interface components like tabs and pills, navbar, alerts, page headers, and more.
- JavaScript plugins: similar to Components, these JavaScript plugins are interactive components for things like tooltips, popovers, modals, and more.
And that's enough for now. I hope to have a play shortly and report back further.
I've started adding my old technical articles to this blog, ascribed the dates they were originally published, but I'll list the articles here as well, though some apparently weren't permalinks so I may have to remember/ dig out the originals of these. Somewhat surprisingly, given many articles date back to 2003, the vast majority are still relevant to varying degrees. Italics and/ or italicised comments indicate those which have suffered with the passage of time, e.g. mobile development has moved on apace and exams have a habit of being deprecated.
If you do link through, there seems to be no rhyme nor reason as to what rating an article gets, as far as I can see anyway!
The mix of devices accessing content over the Internet has changed significantly over recent years. The form factor chiefly responsible is the smartphone, with Apple driving matters with the iPhone and Android taking over in market share terms. Other players will continue to attempt to challenge the current market dominance of the ‘big 2’, with perhaps the best bet being Microsoft, though, admittedly, they have failed to make any significant impact with Windows Phone 7.x. This may change if Microsoft manage some decent and cross-pollinating marketing of Windows 8, Windows RT and Windows Phone 8. I’m not holding my breath though. Hands up who knows the difference between WinRT and Windows RT, for example?
Anyway, each phone operating system and surrounding ecosystem has its strengths and weaknesses and I won’t enter into related discussions here. What I shall consider is the messy situation we have with ‘app’ development. The term ‘app’ has entered common parlance, though I’m unsure what the shared understanding of the term actually is. Certainly this has been driven into the collective consciousness by Apple’s ‘AppStore’, and subsequently by the misleadingly named ‘Google Play’. Therefore ‘app’ refers to applications that are designed to be run on mobile devices – initially smartphones and then more recently on tablet devices such as the iPad and the Nexus 7? Microsoft has jumped on the bandwagon with its similar Windows Phone Marketplace and, most recently, the Windows Store.
But ‘app’ just means application, doesn’t it? So here each operating system has its own app store which delivers applications designed to be run on that operating system including, and this is key, a user experience consistent with the design values pertinent to the target device. Thus developers/ organisations, as part of their business model, may choose to target an individual platform for their application. If they have the right app, each of the major platforms offers a significant market and this approach can work. The problem then comes if they wish to extend their market to other platforms – currently, for the very best user experience, they will need to develop that application in a quite different set of technologies, which means that porting apps from one platform to another is expensive (I note that there are cross platform tools out there but, as far as I am aware, they remain largely unproven – see below).
Now switch to an alternate scenario: an organisation which is not targeting a platform but targeting an existing customer base, and hence will find itself prioritising development of its apps for differing platforms. Take the example of a bank which wishes to produce an app for customers to perform account management. Why? Well, for competitive advantage of course – to keep existing customers with them and to encourage new customers to them. They will need to produce and maintain multiple versions of the application for the different platforms: two, three, four? Logically they would then continue to prioritise platforms based on the breakdown of their current/ targeted user base.
So, firstly, is this situation any different from that with more traditional devices, i.e. Apple or Microsoft OS based desktop or laptop devices? Yes, because the mobile nature of devices opens up so many more useful app scenarios and the app store concept has taken off. No, in that we had, and have, OS specific traditional client computing ‘apps’; the solution there was moving the applications to the web and to related cross platform technologies.
So there are two, related solutions to this problem area which is only going to get worse as the market further fractures with device form factors and operating systems:
- rather than having client applications specifically developed for each mobile OS, let’s write them in HTML5 and related technologies. There has already been a big push in the last three-plus years to move more and more functionality down to the client as devices have become more powerful, since this path offers more scalability than having the server expend significant computing resources for each client device. A caveat here: mobile devices offer significantly fewer computing resources than your desktop client, though this is changing quickly. Technology has a habit of doing this …
- rather than having client applications specifically developed for each mobile OS let’s write them in a generic fashion and rely more on using tools and technology to ‘translate’ these apps to work in a variety of client devices.
Or, probably, a bit of both. The downside? Well, there is device specific knowledge and trickery to ensuring optimal user experiences (particularly) in apps. Will the user experience be good enough for end users via cross-platform development solutions? I hope so. The current situation can’t be sustainable, can it?
Chris Sully
Technical Director, Propona
[first published: https://connect.innovateuk.org/web/propona/blog/-/blogs/apps-operating-systems-and-devices?ns_33_redirect=%2Fweb%2Fpropona%2Fblog]
Note that this article was first published on 02/01/2003. The original article is available on DotNetJohn.
Introduction
The XML Schema definition language (XSD) enables you to define the structure (elements and attributes) and data types for XML documents. It enables this in a way that conforms to the relevant W3C recommendations for XML schema. XSD is just one of several XML schema definition languages but is the one best supported by Microsoft in .NET.
The schema specifies the ordering of tags in the document, indicates fields that are mandatory or that may occur different numbers of times, gives the datatypes of fields and so on. The schema importantly is able to ensure that data values in the XML file are valid as far as the parent application is concerned.
Schemas are also useful when developers in different companies or even in different parts of the same company read and write XML documents that they will share. The schema acts as a contract specifying exactly what one application or part of an application must write into an XML file and another program can expect to be there. The schema unambiguously states the correct format for the shared XML.
A well formed XML document is one that satisfies the usual rules of XML. For example, in a well formed document there is exactly one data root node, all opening tags have corresponding closing tags, tag names do not contain spaces, the names in opening and closing tags are spelt in exactly the same way, tags are properly nested, etc.
A valid document is one that is well formed and that satisfies a schema.
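Well-formedness is exactly what any conforming XML parser checks before a schema even enters the picture. As a quick illustration (using Python's standard library parser purely for convenience, since this article's own tooling is .NET based), the rules above amount to "does the document parse at all":

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if the document satisfies the basic XML rules:
    one root, matching open/close tags, proper nesting, etc."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<a><b>hi</b></a>"))   # properly nested: True
print(is_well_formed("<a><b>hi</a></b>"))   # mismatched nesting: False
```

Validity (conformance to a schema) is the stronger, separate check that the rest of the article deals with.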
Visual Basic .NET provides several methods for validating an XML document against a schema. There are articles on how to do this already on dotnetjohn (Upload an XML File and Validate Against a Schema). The focus of this article, however, shall be on the basic elements of the Microsoft preferred schema language (XSD), after a brief history lesson / introduction to other common types of schema you may come across and why the XSD alternative was developed.
DTD and XDR
While XML is a relatively new technology the need for schemas was recognised early and so several have already been created. Microsoft focuses heavily on the most recent version, XSD, so VB has the most support for this form of schema, and hence will probably be the one Microsoft developers use most.
However, you may well happen upon the situation, particularly with enterprise development, where you are required to work with other forms of XML schema. While VB has few tools for building other types of schema, it can validate data using DTD and XDR.
The first schema standard, DTD (Document Type Definition), was developed alongside XML v1.0. Many believed this was not an ideal solution as a schema definition language, which is why Microsoft came up with XSD as its own suggested replacement and submitted this to the W3C for consideration. One of the problems was, and is, that DTDs are not XML based, so you have yet another language to learn to go with the proliferation that comes with XML (XPath and XSL, for example). Further, developers also found that DTD lacked the power and flexibility they needed to completely define all of the datatypes they wanted to represent in XML. A schema that can’t validate all of the data’s requirements is of limited use.
XDR (XML Data Reduced) is another schema language, this time XML based and providing a superset of the functionality of DTDs. XDR should not be confused with Sun’s XDR (External Data Representation) … another format for data description but in this case physical representation of data rather than logical representation as per XML and XML Data Reduced schemas.
The last few paragraphs were just to let you know there are other schema formats out there, some of which have limited support in .NET. Now we’ll focus on XSD.
XSD
Wherever you see 'schema' from now we’re referring to XSD. As per many topics relating to XML (see my article on XSL Understanding How to Use XSL Transforms) the XSD specification is complex as well as being quickly evolving. The following will cover the basics of XSD so you can start to construct some useful schemas for use in your own applications. You’ll then need to follow up the information presented elsewhere.
Note that Visual Studio .NET includes an XSD editor that makes generating schemas relatively painless. Unless you understand some of the basic rules of XSD, however, the editor may prove a tad confusing.
Types and Elements
XSD schemas contain type definitions and elements. A type definition defines an allowed XML data type. An 'address' might be an example of a type you might want to define. An element represents an item created in the XML file. If the XML file contains an Address tag, then the XSD file will contain a corresponding element named Address. The data type of the Address element indicates the type of data allowed in the XML file’s Address tag.
Type definitions may be simple or complex. Simple and complex types allow definition of the new data types in addition to the 19 built in primitive data types which include string, Boolean, decimal, date, etc.
A simpleType allows a type definition for a value that can be used as the content of an element or attribute. This data type cannot contain elements or have attributes.
A complexType allows a type definition for elements that can contain attributes and elements.
Let’s pause here and take a look at an example. Let’s work backwards from an XML document as I’ll assume we’re all reasonably familiar with XML but less so with XSD. Here’s an XML file representing a simplified contacts database containing just one record currently:
<?xml version="1.0" encoding="utf-8" ?>
<Contacts>
<Contact>
<FirstName>Chris</FirstName>
<Surname>Sully</Surname>
<Address>
<Street>22 Denton Road</Street>
<City>Cardiff</City>
<Country>Wales</Country>
</Address>
<Tel>02920371877</Tel>
</Contact>
</Contacts>
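To make the structure concrete, here is how an application might read that document. This is an illustrative sketch using Python's ElementTree rather than the VB.NET tooling the article assumes; the element names are exactly those from the XML above:

```python
import xml.etree.ElementTree as ET

CONTACTS = """<?xml version="1.0" encoding="utf-8" ?>
<Contacts>
  <Contact>
    <FirstName>Chris</FirstName>
    <Surname>Sully</Surname>
    <Address>
      <Street>22 Denton Road</Street>
      <City>Cardiff</City>
      <Country>Wales</Country>
    </Address>
    <Tel>02920371877</Tel>
  </Contact>
</Contacts>"""

root = ET.fromstring(CONTACTS)
# Walk each Contact record and pull out simple and nested fields.
for contact in root.findall("Contact"):
    name = contact.findtext("FirstName")
    city = contact.findtext("Address/City")   # path into the nested Address element
    print(name, city)   # Chris Cardiff
```

The schema's job is to guarantee that code like this can rely on those elements being present and correctly typed.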
In fact in Visual Studio .NET you can simply right click on this XML file and generate the schema from it. Of course it may well not be quite correct for your needs, as it shall be based on one record of data. Not even Visual Studio .NET can predict the future with any accuracy … ;) Here’s what it comes up with:
<?xml version="1.0" ?>
<xs:schema id="Contacts" targetNamespace="http://tempuri.org/XMLFile1.xsd" xmlns:mstns="http://tempuri.org/XMLFile1.xsd" xmlns="http://tempuri.org/XMLFile1.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" attributeFormDefault="qualified" elementFormDefault="qualified">
<xs:element name="Contacts" msdata:IsDataSet="true" msdata:Locale="en-GB" msdata:EnforceConstraints="False">
<xs:complexType>
<xs:choice maxOccurs="unbounded">
<xs:element name="Contact">
<xs:complexType>
<xs:sequence>
<xs:element name="FirstName" type="xs:string" minOccurs="0" />
<xs:element name="Surname" type="xs:string" minOccurs="0" />
<xs:element name="Tel" type="xs:string" minOccurs="0" />
<xs:element name="Address" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="Street" type="xs:string" minOccurs="0" />
<xs:element name="City" type="xs:string" minOccurs="0" />
<xs:element name="Country" type="xs:string" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
</xs:element>
</xs:schema>
Picking out some key elements:
An XML Schema is composed of the top-level schema element. The schema element definition must include the following namespace:
http://www.w3.org/2001/XMLSchema
and you can see from the above this isn’t all that is generated, but we’ll ignore the extra elements for now.
The actual definition commences with the first <xs:element… definition which has the name attribute 'Contacts'. Again the other attributes we can ignore for now. Contacts is necessarily defined as a complex type as it contains other elements.
We then encounter <xs:choice…: a choice element allows the XML file to contain one of the elements inside the choice element. The attribute maxOccurs="unbounded" is used to indicate that the Contacts element can contain any number of Contact elements.
The contact element is again a complex type comprised of a sequence of further elements. A sequence element requires the XML document to contain the items inside the sequence in order. By default sequence elements must appear exactly once; this can be overridden using the minOccurs and maxOccurs attributes to indicate it can occur any number of times (including 0).
The individual elements are defined to be of type string (a simple type). Address is similarly defined as a complex type comprising a sequence of elements of type string.
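The ordering requirement that sequence imposes is worth making concrete. A real validator does far more, but its order check amounts to something like the following sketch (Python used purely for illustration; the expected order is the one from the generated schema above):

```python
import xml.etree.ElementTree as ET

# The order the <xs:sequence> in the generated schema imposes on a Contact.
EXPECTED_ORDER = ["FirstName", "Surname", "Tel", "Address"]

def follows_sequence(element, expected):
    """Check the element's children appear in the schema-defined order.
    Since minOccurs="0" makes children optional, only relative order is checked."""
    tags = [child.tag for child in element]
    positions = [expected.index(t) for t in tags if t in expected]
    return positions == sorted(positions)

good = ET.fromstring("<Contact><FirstName>C</FirstName><Surname>S</Surname></Contact>")
bad = ET.fromstring("<Contact><Surname>S</Surname><FirstName>C</FirstName></Contact>")
print(follows_sequence(good, EXPECTED_ORDER))  # True
print(follows_sequence(bad, EXPECTED_ORDER))   # False: Surname before FirstName
```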
Hopefully that has been an informative introduction to some commonly encountered constructs by way of an example. We’ll now continue on to look at some of the XSD language constructs in a little more detail, starting with elements.
Elements and their attributes
e.g. <xs:element name="Street" type="xs:string" minOccurs="0" />
An element defines an entity in an XML file. So the above defines an element of name <Street> and type string. The element can have several attributes which modify the element's behaviour, for example:
minOccurs and maxOccurs: as indicated already these give the minimum and maximum allowed number of times an element can occur within a complex type. To make an element optional minOccurs is set to 0. To allow an unlimited number of the element maxOccurs is set to 'unbounded'.
ref: makes the element a copy of another element defined in the schema. This is best avoided however... it is better to define a distinct type and base both element definitions on this type rather than introduce such dependencies into the schema, e.g.
<xsd:simpleType name="PhoneNumberType">
<xsd:restriction base="xsd:string" />
</xsd:simpleType>
...
<xsd:complexType name="Contact">
<xsd:sequence>
...
<xsd:element name="HomeTel" type="PhoneNumberType"/>
<xsd:element name="WorkTel" type="PhoneNumberType"/>
...
</xsd:sequence>
</xsd:complexType>
Though you might at the same time like to tie down your definition of the PhoneNumberType more tightly. We’ll return to <xsd:restriction … shortly.
default: assigns a default value to the element in which case if the XML document omits the corresponding field it will be assumed to have this value. An element that has a default value should also have minOccurs set to 0 so the XML document may omit it.
fixed: gives the element an unchangeable value. The corresponding XML element cannot have another value, although it may be omitted if minOccurs is 0. Why is this useful? Well, you may want to ensure that an XML data field has the same value throughout the document; for example, you may want to add a new Country field to an existing XML document and ensure that its value is UK for every record.
Types
Type definitions have two goals:
- To describe the data allowed in a simple field, e.g. text format of an e-mail address. Simple types achieve this goal.
- To describe relationships amongst different fields, e.g. a contact type consists of a sequence of firstname, surname, telephone, etc. Complex types achieve this goal of designing more complex data types.
In addition to simple and complex types there are built in types, similar to simple data types such as integers, dates, etc. in other programming languages or .NET’s value data types provided by the Common Type System (CTS). We’ve seen one of the built in types already in our element definitions in the form of the often-employed string type. These built in types are W3C defined and include date, dateTime, decimal, double, float, gYear, etc. See the SDK documentation for an authoritative list.
A facet is a characteristic of a data type that you can use to restrict the values allowed by a type. Facets are effectively attributes of the data type. For example, the string datatype has a maxLength facet. Again for further details of the facets of each built in type see the SDK documentation.
Facets enable short cuts to building simple types by restricting another data type. We’ve already seen the restriction construct in example code above; using this and the enumeration facet of the string built in data type we can define allowable values for a type, e.g.
<xsd:simpleType name="Colours">
<xsd:restriction base="xsd:string">
<xsd:enumeration value="red" />
<xsd:enumeration value="green" />
<xsd:enumeration value="blue" />
</xsd:restriction>
</xsd:simpleType>
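A validator's enumeration check boils down to a set-membership test. Purely as an illustrative sketch (not real validator code), the Colours type above behaves like this:

```python
# The allowed values declared by the xsd:enumeration facets above.
ALLOWED_COLOURS = {"red", "green", "blue"}

def valid_colour(value):
    """Mimic the enumeration facet: accept only values from the fixed set."""
    return value in ALLOWED_COLOURS

print(valid_colour("green"))   # True
print(valid_colour("purple"))  # False: not in the enumeration
```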
The pattern facet is particularly powerful as it specifies a regular expression that the XML field data must match. Regular Expressions are worthy of an article or three in themselves, and there are several books on the subject if interested in improving your knowledge. Look out for an article on regular expressions on dotnetjohn in the not too distant future! For now, we’ll largely skip over the topic of regular expressions though here is an example:
<xsd:simpleType name="emailType">
<xsd:restriction base="xsd:string">
<xsd:pattern value="[^@]+@[^@]+\.[^@]+" />
</xsd:restriction>
</xsd:simpleType>
Let’s decipher ”[^@]+@[^@]+\.[^@]+” – this matches an e-mail address of the form a@b.c where a,b and c are any strings that do not contain the @ symbol. The value string equates to 'match any character other than the @ symbol one or more times; then match an @ symbol; then again match any character other than the @ symbol one or more times; next match a full stop and then once more any character other than the @ symbol one or more times'.
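You can try the same expression directly. Here's a sketch using Python's re module (chosen just for illustration); note that XSD patterns are implicitly anchored to the whole value, which fullmatch reproduces:

```python
import re

# The same pattern as in the <xsd:pattern> facet above.
EMAIL = re.compile(r"[^@]+@[^@]+\.[^@]+")

print(bool(EMAIL.fullmatch("chris@example.com")))  # True
print(bool(EMAIL.fullmatch("not-an-email")))       # False: no @ or dot
print(bool(EMAIL.fullmatch("a@b@c.com")))          # False: @ not allowed within the parts
```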
The length, minLength, maxLength, totalDigits, fractionDigits, minExclusive, maxExclusive, minInclusive and maxInclusive facets are all self-describing, but it’s important to know they are available.
In addition to the primitive built in types there exist built in data types derived from these primitive types. These derived built in data types refine the definition of primitive types to create more restrictive types. They are based on the string and decimal primitive types.
The string derived types represent various entities that occur in XML syntax itself. For example, the Name type represents a string that satisfies the form of XML token names – it begins with a letter, underscore or colon and the rest of the string contains letters and digits.
The decimal derived types represent various kinds of numbers and thus are considerably more useful for validating data. There are thirteen such decimal derived types, e.g. byte, int, negativeInteger. See the SDK documentation for the full list.
Attributes
Just as you use an XSD schema’s element entities to define the data that can be contained in the corresponding XML data elements you can use attribute entities to define the attributes the XML element can have. Let’s return to Visual Studio .Net and see what schema it comes up with for the following small attribute-centric piece of XML.
<contacts>
<contact firstname="Chris" Surname="Sully" />
</contacts>
<?xml version="1.0" ?>
<xs:schema id="contacts" targetNamespace="http://tempuri.org/attribute_centric.xsd" xmlns:mstns="http://tempuri.org/attribute_centric.xsd" xmlns="http://tempuri.org/attribute_centric.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" attributeFormDefault="qualified" elementFormDefault="qualified">
<xs:element name="contacts" msdata:IsDataSet="true" msdata:Locale="en-GB" msdata:EnforceConstraints="False">
<xs:complexType>
<xs:choice maxOccurs="unbounded">
<xs:element name="contact">
<xs:complexType>
<xs:attribute name="firstname" form="unqualified" type="xs:string" />
<xs:attribute name="Surname" form="unqualified" type="xs:string" />
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
</xs:element>
</xs:schema>
You can see that attributes equate to the
<xs:attribute name="firstname" form="unqualified" type="xs:string" />
construct. The form attribute of the attribute tag (yes that does make sense!) is set to unqualified. This means that the attributes in the XML file do not need to be qualified by the schema’s namespace.
Why use attributes rather than elements (referred to as attribute-centric and element-centric XML)? Well, they are often interchangeable and it is largely a matter of taste. Generally however, elements should contain data and attributes should contain information that describes the data. So for contacts one could recommend an attribute centric approach.
However, such decisions are mitigated by the following:
- the attribute centric approach consumes less file space
- attributes can specify default values whereas elements generally do not
- you can order elements via the sequence construct; there is no method of enforcing order with attributes
- elements can occur more than once in complex types but an attribute can occur only once
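To round off the comparison, here is how the attribute-centric contact above is read by application code. Again this is an illustrative sketch with Python's ElementTree standing in for the .NET XML APIs:

```python
import xml.etree.ElementTree as ET

# The attribute-centric document from earlier in the article.
doc = ET.fromstring('<contacts><contact firstname="Chris" Surname="Sully" /></contacts>')

contact = doc.find("contact")
# Attributes are read by name rather than by navigating child elements.
print(contact.get("firstname"))  # Chris
print(contact.get("Surname"))    # Sully
```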
Complex Types
As previously stated, whereas a simple type determines the type of data a simple text field can hold, a complex type defines relationships among other types. For example we defined a contact record to include fields to store first name and surname. Simple types are then used to define the allowable values in the fields. The complex type determines the fields that make up the contact type.
Complex types are also useful for defining XML elements that can have attributes – simple types cannot have attributes. A complex type can contain only one of a small number of elements. The elements within that element define the relationship the complex type represents. The most common elements are simpleContent, sequence, choice and all, as follows:
simpleContent: a complex type that contains a simpleContent element must contain only character data or a simple type. This construct is primarily so one may add attributes to a simple type.
sequence: as we’ve seen this allows specification of a required order to elements of a complexType.
choice: again as we’ve seen the corresponding XML data must include exactly one of the elements listed inside the choice. Note this is entirely different from the enumeration facet previously introduced: rather than a fixed set of values the choice construct allows the complex type to contain one of several types.
all: when a complex type includes the all element the corresponding XML data can include some or all of the listed elements in any order.
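As a sketch of the last of these, a contact type using all would let firstname and surname appear in either order in the instance document (a hypothetical named type reusing the element names from the earlier schema):

```xml
<xs:complexType name="contactType">
  <xs:all>
    <xs:element name="firstname" type="xs:string" />
    <xs:element name="surname" type="xs:string" />
  </xs:all>
</xs:complexType>
```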
Named and Unnamed Types
Finally, to finish off our overview: if you will use a type only once there is no need to give it a name, as you will not need to reference it again. You may instead define the type inline, within the element that uses it. This is the case for the Visual Studio generated schemas above, e.g.:
<xs:complexType>
<xs:choice maxOccurs="unbounded">
<xs:element name="contact">
<xs:complexType>
<xs:attribute name="firstname" form="unqualified" type="xs:string" />
<xs:attribute name="Surname" form="unqualified" type="xs:string" />
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
If the schema referenced this complexType again it would be more succinct to add a name attribute to the <xs:complexType … definition so you could reference it again later in the schema definition.
To clarify by example, the following uses the email simple type we introduced earlier to reduce the size of an ‘EmailContactType’ definition:
<xsd:simpleType name="emailType">
  <xsd:restriction base="xsd:string">
    <xsd:pattern value="[^@]+@[^@]+\.[^@]+" />
  </xsd:restriction>
</xsd:simpleType>
<xsd:complexType name="emailContactType">
  <xsd:sequence>
    <xsd:element name="name" type="xsd:string" />
    <xsd:element name="email" type="emailType" />
  </xsd:sequence>
</xsd:complexType>
Alternatively you could have defined the email type within the email element. Whether reusing or not, employing this convention will generally make your code tidier.
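The pattern facet uses standard regular expression syntax, so you can sanity-check the expression with .NET's Regex class before committing it to a schema. A minimal console sketch (note that XSD patterns are implicitly anchored, so the equivalent .NET expression needs explicit ^ and $ anchors):

```vbnet
Imports System.Text.RegularExpressions

Module EmailPatternCheck
    Sub Main()
        ' XSD anchors patterns implicitly; add ^ and $ for the .NET equivalent.
        Dim pattern As String = "^[^@]+@[^@]+\.[^@]+$"
        Console.WriteLine(Regex.IsMatch("someone@example.com", pattern)) ' True
        Console.WriteLine(Regex.IsMatch("not-an-email", pattern))        ' False
    End Sub
End Module
```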
Conclusion
There we shall halt our introduction to XML Schemas, and to the basic XSD constructs specifically, in the hope that you are now better placed to understand why and how to use XML schemas in .NET.
References
.NET Framework SDK documentation
Visual Basic .NET and XML – Stephens and Hochgurtel
Programming Visual Basic .NET – Francesco Balena, Microsoft Press
Introduction
Note that this article was first published on 30/03/2003. The original article is available on DotNetJohn.
I came to the realisation a little time ago that I really wasn’t making the most of the error handling facilities of ASP.NET. The sum of my ASP.Net error handling knowledge up until that point was the new (to VB) Try … Catch … Finally construct at the page level. While I shall examine this new addition there are more facilities at our disposal and this article shall share the results of my recent investigations.
In anything but the simplest of cases your application WILL contain errors. You should identify where errors might be likely to occur and code to anticipate and handle them gracefully.
The .NET Framework’s Common Language Runtime (CLR) implements exception handling as one of its fundamental features. As you might expect given the language independence of the framework, you can write error handling in VB.NET that handles errors raised in C# code, for example: all exceptions derive from the same base type (System.Exception).
Options … Options (preventing runtime errors)
Where we can, we should capture errors as early as possible, as fewer will then make it through to the runtime environment. VB.NET offers the Option Strict and Option Explicit statements to prevent errors at design time.
Classic ASP programmers should be familiar with Option Explicit – it forces explicit declaration of all variables at the module level. In addition when programming ASP.NET pages in VB.NET when Option Explicit is enabled, you must declare all variables using Public, Private, Dim or Redim.
The obvious mistake that Option Explicit captures is mistyping a variable name, which would otherwise silently create a second variable within the same scope … a situation which is very likely to lead to runtime errors, if not exceptions.
Thankfully, in ASP.NET Option Explicit is set to on by default. If for any reason you did need to reset, the syntax is:
Option Explicit Off
at the top of a code module or
<%@ Page Explicit="False" %>
in a web form.
Enabling Option Strict causes errors to be raised at compile time if you attempt an implicit data type conversion that could lead to a loss of data. Thus, if this lost data is of potential importance to your application, Option Strict should be enabled. Option Strict is said to only allow ‘widening’ conversions, where the target data type is able to accommodate a greater amount of data than the source type.
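For example, with Option Strict On the narrowing assignment below fails at compile time while the widening one is fine (a minimal sketch):

```vbnet
Option Strict On

Module ConversionDemo
    Sub Main()
        Dim i As Integer = 123456
        Dim l As Long = i          ' widening: every Integer fits in a Long, allowed
        ' Dim s As Short = i       ' narrowing: compile-time error under Option Strict
        Dim s As Short = CShort(i Mod 100) ' an explicit conversion compiles, but beware data loss
        Console.WriteLine(l)
        Console.WriteLine(s)
    End Sub
End Module
```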
The syntax is as per Option Explicit.
Exceptions
What precisely is an exception? The exception class is a member of the System namespace and is the base class for all exceptions. Its two sub-classes are the SystemException class and the ApplicationException class.
The SystemException class defines the base class for all .NET predefined exceptions. One I commonly encounter is the SQLException class, typically when I haven’t quite specified my stored procedure parameters correctly in line with the stored procedure itself.
When an exception object is thrown that is derived from the System.Exception class you can obtain information from it regarding the exception that occurred. For example, the following properties are exposed:
| Property | Description |
| --- | --- |
| HelpLink | Gets or sets a link to the help file associated with this exception. |
| InnerException | Gets the Exception instance that caused the current exception. |
| Message | Gets a message that describes the current exception. |
| Source | Gets or sets the name of the application or the object that caused the error. |
| StackTrace | Gets a string representation of the frames on the call stack at the time the current exception was thrown. |
| TargetSite | Gets the method that throws the current exception. |
See the SDK documentation for more information.
The ApplicationException class allows you to define your own exceptions. It contains all the same properties and methods as the SystemException class. We shall return to creating such custom exceptions after examining the main error handling construct at our disposal: 'Try … Catch … Finally'.
Structured and Unstructured Error Handling
Previous versions of VB had only unstructured error handling. This is the method of using a single error handler within a method that catches all exceptions. It’s messy and limited. Briefly, VB’s unstructured error handlers (still supported) are:
On Error GoTo line[or]label
On Error Resume Next
On Error GoTo 0
On Error GoTo -1
But forget about these, as we now have new and improved C++-style structured exception handling with the Try … Catch … Finally construct, as follows:
Try
[ tryStatements ]
[ Catch [ exception [ As type ] ] [ When expression ]
[ catchStatements ] ]
[ Exit Try ]
...
[ Finally
[ finallyStatements ] ]
End Try
Thus we try to execute some code; if this code raises an exception the runtime will check to see if the exception is handled by any of the Catch blocks in order. Finally we may execute some cleanup code, as appropriate. Exit Try optionally allows us to break out of the construct and continue executing code after End Try. When optionally allows specification of an additional condition which must evaluate to true for the Catch block to be executed.
Here’s my code snippet for SQL server operations by way of a simple, and not particularly good (see later comments), example:
Try
    myConnection.Open()
    myCommand = New SqlCommand("USP_GENERIC_select_event_dates", myConnection)
    myCommand.CommandType = CommandType.StoredProcedure
    myCommand.Parameters.Add(New SqlParameter("@EventId", SqlDbType.Int))
    myCommand.Parameters("@EventId").Value = EventId
    objDataReader = myCommand.ExecuteReader()
Catch objError As Exception
    ' display error details
    outError.InnerHtml = "<b>* Error while executing data command (ADMIN: Select Event Dates)</b>.<br />" _
        & objError.Message & "<br />" & objError.Source & _
        ". Please <a href='mailto:mascymru@cymru-web.net'>e-mail us</a> providing as much detail as possible including the error message, what page you were viewing and what you were trying to achieve.<p /><p />"
    Exit Function ' and stop execution
End Try
There are several problems with this code as far as best practice is concerned, the more general of which I’ll leave to the reader to pick up from the following text, but in particular there should be a Finally section which tidies up the database objects.
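By way of illustration (a sketch, not the production code the article describes), a version with a Finally section tidying up the database objects might look like this:

```vbnet
Try
    myConnection.Open()
    myCommand = New SqlCommand("USP_GENERIC_select_event_dates", myConnection)
    myCommand.CommandType = CommandType.StoredProcedure
    myCommand.Parameters.Add(New SqlParameter("@EventId", SqlDbType.Int))
    myCommand.Parameters("@EventId").Value = EventId
    objDataReader = myCommand.ExecuteReader()
Catch objError As SqlException
    ' handle the database-specific error here
Finally
    ' runs whether or not an exception occurred
    If Not objDataReader Is Nothing Then objDataReader.Close()
    If myConnection.State = ConnectionState.Open Then myConnection.Close()
End Try
```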
Note it is good form to have multiple Catch blocks to catch different types of possible exceptions. The order of the Catch blocks affects the possible outcome … they are checked in order.
You can also throw your own exceptions for the construct to deal with; or re-throw existing exceptions so they are dealt with elsewhere. See the next section for a little more detail.
You could just have a single Catch block that traps the general exception type, Exception (as in the snippet above!). This is not recommended, as it suggests a laziness to consider likely errors. The initial Catch blocks should be for possible specific errors, with a general exception Catch block as a last resort for anything not covered by the earlier blocks.
For example, if accessing SQLServer you know that a SQLException is possible. If you know an object may return a null value and will cause an exception, you can handle it gracefully by writing a specific catch statement for a NullReferenceException.
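A sketch of such ordering – specific exception types first, the general Exception type as a last resort:

```vbnet
Try
    ' data access code as before
Catch ex As SqlException
    ' database-specific problems, e.g. a misnamed stored procedure parameter
Catch ex As NullReferenceException
    ' an object we expected to exist was Nothing
Catch ex As Exception
    ' anything not covered above - the last resort
End Try
```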
For your information, here’s a list of the predefined exception types provided by the .NET runtime:
| Exception type | Base type | Description | Example |
| --- | --- | --- | --- |
| Exception | Object | Base class for all exceptions. | None (use a derived class of this exception). |
| SystemException | Exception | Base class for all runtime-generated errors. | None (use a derived class of this exception). |
| IndexOutOfRangeException | SystemException | Thrown by the runtime only when an array is indexed improperly. | Indexing an array outside its valid range: arr[arr.Length+1] |
| NullReferenceException | SystemException | Thrown by the runtime only when a null object is referenced. | object o = null; o.ToString(); |
| InvalidOperationException | SystemException | Thrown by methods when in an invalid state. | Calling Enumerator.GetNext() after removing an item from the underlying collection. |
| ArgumentException | SystemException | Base class for all argument exceptions. | None (use a derived class of this exception). |
| ArgumentNullException | ArgumentException | Thrown by methods that do not allow an argument to be null. | String s = null; "Calculate".IndexOf(s); |
| ArgumentOutOfRangeException | ArgumentException | Thrown by methods that verify that arguments are in a given range. | String s = "string"; s.Chars[9]; |
| ExternalException | SystemException | Base class for exceptions that occur in, or are targeted at, environments outside the runtime. | None (use a derived class of this exception). |
| ComException | ExternalException | Exception encapsulating COM HRESULT information. | Used in COM interop. |
| SEHException | ExternalException | Exception encapsulating Win32 structured exception handling information. | Used in unmanaged code interop. |
Throwing Exceptions
As indicated earlier, not only can you react to raised exceptions, you can throw exceptions too when needed. For example, you may wish to re-throw an exception after catching it and not being able to recover from the exception. Your application-level error handling could then redirect to an appropriate error page.
You may further wish to throw your own custom exceptions in reaction to error conditions in your code.
The syntax is:
Throw New Exception("A description of the problem")
Alternatively, a bare Throw statement within a Catch block re-throws the exception currently being handled, preserving its stack trace.
Creating Custom Exceptions
As mentioned earlier, via the ApplicationException class you have the ability to create your own exception types.
Here’s an example VB class to do just that:
Imports System
Imports System.Text

Namespace CustomExceptions
    Public Class customException1 : Inherits ApplicationException
        Public Sub New()
            MyBase.New("<H4>Custom Exception</H4><BR>")
            Dim strBuild As New StringBuilder()
            strBuild.Append("<p COLOR='RED'>")
            strBuild.Append("For more information ")
            strBuild.Append("please visit: ")
            strBuild.Append("<a href='http://www.cymru-web.net/exceptions'>")
            strBuild.Append("Cymru-Web.net</a></p>")
            MyBase.HelpLink = strBuild.ToString()
        End Sub
    End Class
End Namespace
Looking at this code: the class declaration shows that to create a custom exception we must inherit from the ApplicationException class. In the initialization code for the class we construct a new ApplicationException object using a string (one of the overloaded constructors of ApplicationException – see the .NET documentation for details of the others). We also set the HelpLink property string for the exception – this is a user-friendly message for presentation to any client application.
The MyBase keyword behaves like an object variable referring to the base class of the current instance of a class (ApplicationException). MyBase is commonly used to access base class members that are overridden or shadowed in a derived class. In particular, MyBase.New is used to explicitly call a base class constructor from a derived class constructor.
Next a small test client, written in VB.NET using Visual Studio.Net so we have both web form and code behind files:
WebForm1.aspx:
<%@ Page Language="vb" AutoEventWireup="false" Codebehind="WebForm1.aspx.vb" Inherits="article_error_handling.WebForm1"%>
<html>
<body>
</body>
</html>
WebForm1.aspx.vb:
Imports article_error_handling.CustomExceptions
Public Class WebForm1
    Inherits System.Web.UI.Page

    Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        Try
            Throw New customException1()
        Catch ex As customException1
            Response.Write(ex.Message & " " & ex.HelpLink)
        End Try
    End Sub
End Class
This (admittedly simple) example you can extend to your own requirements.
Page Level Error Handling
Two facets:
- Page redirection
- Using the Page object’s Error event to capture exceptions
Taking each in turn:
Page redirection
Unforeseen errors can be trapped with the ErrorPage property of the Page object. This allows definition of a redirection URL in case of unhandled exceptions. A second step is required to enable such error trapping, however – setting the customErrors section of your web.config file, as follows:
<configuration>
<system.web>
<customErrors mode="On">
</customErrors>
</system.web>
</configuration>
Then you’ll get your page level redirection rather than the page itself returning an error, so the following would work:
<%@ Page ErrorPage="http://www.cymru-web.net/GenericError.htm" %>
A useful option in addition to ‘On’ and ‘Off’ for the mode attribute of customErrors is ‘RemoteOnly’: when specified, redirection will only occur if the browser application is running on a remote computer. This allows those with access to the local computer to continue to see the actual errors raised.
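So during development you might leave the following in place, seeing full error details locally while remote visitors get the friendly redirect:

```xml
<configuration>
  <system.web>
    <customErrors mode="RemoteOnly" />
  </system.web>
</configuration>
```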
Page Object’s Error Event
The Page object has an Error event that is fired when an unhandled exception occurs in the page. It is not fired if the exception is handled in the code for the page. The relevant sub is Page_Error, which you can use in your page as illustrated by the following code snippet:
Sub Page_Error(sender As Object, e As EventArgs)
    Dim PageException As String = Server.GetLastError().ToString()
    Dim strBuild As New StringBuilder()
    strBuild.Append("Exception!")
    strBuild.Append(PageException)
    Response.Write(strBuild.ToString())
    Context.ClearError()
End Sub
As with page redirection this allows you to handle unexpected errors in a more user-friendly fashion. We’ll return to explain some of the classes used in the snippet above when we look at a similar scenario at the application level in the next section.
Application Level Error Handling
In reality you are more likely to use application level error handling rather than the page level just introduced. The Application_Error event of the global.asax exists for this purpose. Unsurprisingly, this is fired when an exception occurs in the corresponding web application.
In the global.asax you code against the event as follows:
Sub Application_Error(sender As Object, e As EventArgs)
    'Do Something
End Sub
Unfortunately the EventArgs in this instance are unlikely to be sufficiently informative but there are alternative avenues including that introduced in the code of the last section – the HttpServerUtility.GetLastError method which returns a reference to the last error thrown in the application. This can be used as follows:
Sub Application_Error(sender As Object, e As EventArgs)
    Dim LastException As String = Server.GetLastError().ToString()
    Context.ClearError()
    Response.Write(LastException)
End Sub
Note that the ClearError method of the Context class clears all exceptions from the current request – if you don’t clear it the normal exception processing and consequent presentation will still occur.
Alternatively there is the HttpContext class’s Error property, which returns a reference to the first exception thrown for the current HTTP request/response. An example:
Sub Application_Error(sender As Object, e As EventArgs)
    Dim LastException As String = Context.Error.ToString()
    Context.ClearError()
    Response.Redirect("CustomErrors.aspx?Err=" & Server.UrlEncode(LastException))
End Sub
This illustrates one method for handling application errors – redirection to a custom error page which can then customise the output to the user dependent on the actual error received.
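The receiving page can then pull the error description back out of the query string. A minimal sketch of such a CustomErrors.aspx code-behind (the label name lblError is assumed):

```vbnet
Public Class CustomErrors
    Inherits System.Web.UI.Page

    Private Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles MyBase.Load
        ' "Err" is the query string key used in the redirect above
        Dim errorDescription As String = Request.QueryString("Err")
        If errorDescription Is Nothing Then errorDescription = "Unspecified"
        ' HtmlEncode guards against script injection via the query string
        lblError.Text = Server.HtmlEncode(errorDescription)
    End Sub
End Class
```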
Finally, we also have the ability to implement an application wide error page redirect, via the customErrors section of web.config already introduced, using the defaultRedirect property:
<configuration>
<system.web>
<customErrors mode="On" defaultRedirect="customerrors.aspx?err=Unspecified">
<error statusCode="404" redirect="customerrors.aspx?err=File+Not+Found"/>
</customErrors>
</system.web>
</configuration>
Note this also demonstrates customised redirection via the error element. The HttpStatusCode enumeration holds the possible values of statusCode. This is too long a list to present here – see the SDK documentation.
Conclusion
I hope this has provided a useful introduction to the error handling facilities of ASP.NET. I further hope you’ll now go away and produce better code that makes use of these facilities! In summary:
- Trap possible errors at design time via Option Explicit and Option Strict.
- Consider in detail the possible errors that could occur in your application.
- Use the Try … Catch … Finally construct to trap these errors at the page level and elsewhere.
- It is generally considered poor programming practice to use exceptions for anything except unusual but anticipated problems that are beyond your programmatic control (such as losing a network connection). Exceptions should not be used to handle programming bugs!
- Use application level exception handling (and perhaps also Page level) to trap foreseen and unforeseen errors in a user friendly way.
References
ASP.NET: Tips, Tutorial and Code
Scott Mitchell et al.
Sams
Programming Visual Basic .NET
Francesco Balena
Microsoft Press
.NET SDK Documentation
15 Seconds: Web Application Error Handling in ASP.NET – http://15seconds.com/issue/030102.htm
.NET Exceptions: Make the Transition from Traditional Visual Basic Error Handling to the Object-Oriented Model in .NET – http://msdn.microsoft.com/msdnmag/issues/02/11/NETExceptions/default.aspx
Note that this article was first published on 02/01/2003. The original article is available on DotNetJohn.
Introduction
The web is a stateless medium – state is not maintained between client requests by default. Technologies must be employed to provide some form of state management if this is what is required of your application, which will be the case for all but the simplest of web applications. ASP.NET provides several mechanisms to manage state in a more powerful and easier-to-use way than classic ASP. It is these mechanisms that are the subject matter for this article.
Page Level State - ViewState
Page level state is information maintained when an element on the web form page causes a subsequent request to the server for the same page – referred to as ‘postback’. This is appropriately called ViewState as the data involved is usually, though not necessarily, shown to the user directly within the page output.
The Control.ViewState property provides a dictionary object for retaining values between such multiple requests for the same page. This is the method that the page uses to preserve page and control property values between round trips.
When the page is processed, the current state of the page and controls is serialized into a base-64 encoded string and saved in the page as a hidden form field. When the page is posted back to the server, the page parses the view state string at page initialization and restores property information in the page.
ViewState is enabled by default so if you view a web form page in your browser you will see a line similar to the following near the form definition in your rendered HTML:
<input type="hidden" name="__VIEWSTATE"
value="dDwxNDg5OTk5MzM7Oz7DblWpxMjE3ATl4Jx621QnCmJ2VQ==" />
When a page is re-loaded two methods pertaining to ViewState are called: LoadViewState and SaveViewState. Page level state is maintained automatically by ASP.NET but you can disable it, as necessary, by setting the EnableViewState property to false for either the controls whose state doesn’t need to be maintained or for the page as a whole. For the control:
<asp:TextBox id="tbName" runat="server" EnableViewState="false" />
for the page:
<%@ Page EnableViewState="false" %>
You can validate that these work as claimed by analyzing the information presented if you turn on tracing for a page containing the above elements. You will see that on postback, assuming ViewState is enabled, the LoadViewState method is executed after the Page class’ Init method has completed. SaveViewState is called after PreRender and prior to actual page rendering.
You can also explicitly save information in the ViewState using the StateBag dictionary collection, accessed as follows:
ViewState(key) = value
Which can then be accessed as follows:
Value = ViewState(key)
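Put together, a page might stash a value on first load and read it back on postback – a minimal sketch:

```vbnet
Private Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles MyBase.Load
    If Not IsPostBack Then
        ' stored in the hidden __VIEWSTATE field on the way out
        ViewState("visitTime") = DateTime.Now.ToString("F")
    Else
        ' restored automatically from __VIEWSTATE on postback
        Response.Write("First loaded: " & CStr(ViewState("visitTime")))
    End If
End Sub
```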
It is important to remember that page level state is only maintained between consecutive accesses to the same page. When you visit another page the information will not be accessible via the methods above. For this we need to look at other methods and objects for storing state information.
Session Level State
What is a user session? Slightly simplifying, it’s the interaction from a user’s first request for a page from a site until the user leaves the site again. Now what if you log in to this site at your first request, assuming this is a site you have previously registered with. How does the application remember who you are for the rest of the ‘session’? Or if you have items in your shopping cart within an e-commerce web application, how does the application ‘remember’ this information when you request to go to the checkout?
The answer may well be session state though the underlying mechanism by which this is achieved may be one of several options. ASP.NET creates a session (it reserves a section of memory) for a user when they first arrive at the site and assigns that user session a unique id that is tied to that section of memory. By default, ASP.NET then creates a cookie on the client that contains this same id. As this id will be sent with any subsequent http requests to this server ASP.NET will be able to match the user against the reserved section of memory. Further the application can store data related to that user session in this reserved memory space to maintain state between requests. It must be remembered that using session state is using server resources for every user of the site so you need to consider the resources required of items you choose to store.
An example:
Session("name") = "Chris Sully"
sets a session variable with key ‘name’ and value “Chris Sully”. To retrieve/display this we use:
lblUserName.Text = Session("name")
In this case assigning to the Text property of a label web server control.
By default the resources associated with this session state maintenance are released if the user does not interact with the site for 20 minutes. If the user returns after a 20-minute break a new session will have been created and any data associated with their previous session will have been lost. However, you can also destroy the session information for a user earlier within the application, if so desired, via
Session.Abandon
and you may also change the default value from 20 minutes. To redefine the session timeout property you use:
Session.Timeout = 5
Alternatively, and representing a more likely scenario, you would specify the value in your web.config file:
<configuration>
<system.web>
<sessionState timeout="10" />
</system.web>
</configuration>
which would halve the default timeout from 20 to 10 minutes.
We'll return to some of the other sessionState properties in subsequent sections.
Looking at a little more detail at the session initialization process:
- User makes a request of the server
- ASP.NET retrieves the user’s sessionID via the cookie value passed in the HTTP request from the client computer. If one does not exist ASP.NET creates one, and raises the Session_OnStart event (which can be reacted to in the global.asax, amongst other locations).
- ASP.NET retrieves any data relating to the sessionID from a session data store. This data store may be of a variety of types, as we shall explore in subsequent sections.
- A System.Web.SessionState.HttpSessionState object is created and populated with the data from the previous step. This is the object you access when using the shortcut Session("name") = "Chris Sully".
There is also a Session_OnEnd which you can code against in your global.asax.
SQLServer
We can use SQLServer to store our session state. We can use other databases if we want to but ASP.NET includes built in support for SQLServer as well as other data stores that make life easier for developers. As you might expect, SQLServer should be the choice for storage of session information in high end, performance critical web applications.
To enable session state storage with SQLServer we'll need the infrastructure to support this, i.e. the tables and stored procedures. Microsoft has supplied these and they are located at
C:\WINNT\Microsoft.NET\Framework\[version directory]
On my system:
C:\WINNT\Microsoft.NET\Framework\v1.0.3705\InstallSqlState.sql
And you also have the TSQL to remove the setup: UninstallSqlState.sql.
So, to enable SQLServer session support, open up and execute InstallSqlState.sql in Query Analyzer. If you then investigate what’s new in your SQLServer setup you will see a new database named ASPState with 15 or so stored procedures used to insert and retrieve the associated session data. You won’t see any new tables! However, if you expand the tempdb database and view the tables therein, you will see two new tables: ASPStateTempApplications and ASPStateTempSessions, which is where our session state information shall be held.
Now, all we need to do, as ASP.NET takes care of everything else for us, is modify web.config so that the ASP.Net application knows it should be using SQLServer for session state management:
<configuration>
<system.web>
<sessionState mode="SQLServer" sqlConnectionString="connectionString" />
</system.web>
</configuration>
where you should replace connectionString with that for your own machine.
If you want to test this, create a simple page which adds an item to the session state. This can be as simple as assigning a value to a session variable, à la Session("name") = "Chris Sully", perhaps simply placing this in the OnLoad sub of an otherwise blank aspx. View this in your browser. Don’t close the browser window after the page is loaded as you’ll end the session. Remember that after the first request, by default, the session will last 20 minutes.
If you now examine the contents of the ASPStateTempSessions table in tempdb, either with Enterprise Manager or Query Analyzer, you will see an entry corresponding to the above set session variable.
The other possibly important consideration is that the session data is stored external to the ASP.Net process. Hence, even if we restart the web server the information will remain available to subsequent user requests. We’ll look at another mechanism for such data isolation shortly.
Cookies
We’ve already introduced that cookies are central to the concept of session – they are the client half of how session state is maintained over browser requests. As well as using them for user identification purposes we can also use them to store custom session information. Cookie functionality is exposed via the HttpCookie class in .NET. This functionality may also be accessed via the Response and Request ASP.NET classes.
Cookies used in this way don’t tie in strongly with the inbuilt state management capabilities of .NET but they can provide useful custom session state management functionality.
To add a cookie (stored on the client browser machine) you use the Response object, e.g.:
Response.Cookies("ExampleCookie")("Time") = DateTime.Now.ToString("F")
i.e.,
Response.Cookies("CookieName")("keyName") = value
‘F’ refers to the full date/time pattern (long time), by the way.
To retrieve a cookie you use the Request object. The cookie is associated with the server URL and hence will be automatically appended to the request parameter collection, and hence accessible as follows:
TimeCookieSet = Request.Cookies("ExampleCookie")("Time")
There are other options available. For example, you may set the Expires property of the cookie. This may be a fixed date or a length of time from the present date:
Response.Cookies("ExampleCookie").Expires = DateTime.Parse("12/12/2003")
Response.Cookies("ExampleCookie").Expires = DateTime.Now.AddMonths(6)
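Conversely, to remove a cookie you set its expiry in the past so the browser discards it (a sketch):

```vbnet
' Overwrite the cookie with an already-expired date; the browser will then drop it
Response.Cookies("ExampleCookie").Expires = DateTime.Now.AddDays(-1)
```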
Cookie Munging
You may wish to configure your applications to use session state without relying on cookies. There could be several reasons for this:
- You need to support old browser types that do not support cookies.
- You wish to cater for people who have chosen to disable cookie support within their browser.
- Certain types of domain name redirection mean that cookies / conventional state management will not work.
Cookie munging causes information regarding the session state id to be added to URL information. Thus the link between client and session data on the server is maintained.
It is simply enabled, as follows:
<configuration>
<system.web>
<sessionState cookieless="true" />
</system.web>
</configuration>
If you change your web.config file to the above and then view a page which uses both the session object and postback you’ll see ASP.Net has inserted some extra information in the URL of the page. This extra information represents the session ID.
Cookie munging certainly does the job in instances where it is required but should be avoided where it is not, as it is insecure being susceptible to manual manipulation of the URL string.
Session State Server
Session State Server runs independently of ASP.NET, the current application, the web server and (possibly) the server machine, meaning you have good isolation of data; hence if there is a problem with the web server you may still be able to recover the user session data – without the need for SQLServer. The State Server also presents a number of extra facilities, including the choice of where and how to deploy the facility.
You can also run session state in the same process as the ASP.NET application (“InProc” – the default). This is similar to how classic ASP managed session state. Why would we want to do this when we have just lauded the benefits of data isolation? Performance is the answer – data stored in the same process is accessed quickly.
You can test this via restarting IIS (iisreset) when you know you have some session data in your application. You could also try restarting the ASP.NET application via the MMC snap-in – the effect is the same. This is achieved by removing and recreating the IIS application (right-click > Properties on the application sub-directory/virtual directory).
A couple of simple test scripts would be:
1:
<%@ Page Language="VB" %>
<html>
<head>
</head>
<body>
<form runat="server" ID="Form1">
<% Session("test") = "test" %>
</form>
</body>
</html>
2:
<%@ Page Language="VB" %>
<html>
<head>
</head>
<body>
<form runat="server" ID="Form2">
<%= Session("test") %>
</form>
</body>
</html>
So, if you run 1, then 2 directly after without closing the browser window, the value will be maintained and you’ll see ‘test’ displayed. If you reset IIS or the IIS application in between the browser requests session information will be lost.
You can also run State Server out of process ("Out-of-Proc"), which means that session state information is stored outside the ASP.Net process space. Hence, if there is a problem with the application or web server, state is maintained. In this scenario the ASPNETState process (aspnet_state.exe) is used, which runs as a service, storing session data in memory.
The first thing we need to do, therefore, is make sure this service is up and running. This can be done via the command line:
net start aspnet_state
Though if you're going to use this out-of-process support for an application you will want to set the service to start on machine boot (Administrative Tools – Services).
Next it’s back to that Web.Config file to tell the application to use the facility:
<configuration>
<system.web>
<sessionState mode="StateServer" stateConnectionString="tcpip=127.0.0.1:42424" />
</system.web>
</configuration>
In actual fact the stateConnectionString attribute is not required when pointing to the default port on the local machine, as above (the value shown is the default setting), but it is important if you wish to use another machine and port for extra security/reliability. It is included here to demonstrate the syntax.
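For example, to point an application at a state server running on another machine, only the host in the connection string changes (the machine name here is illustrative):

```xml
<configuration>
<system.web>
<!-- "stateserver01" is an illustrative machine name; 42424 is the default state server port -->
<sessionState mode="StateServer" stateConnectionString="tcpip=stateserver01:42424" />
</system.web>
</configuration>
```

The aspnet_state service must, of course, be running on that remote machine.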
If you now go back and try the session maintenance test you won’t lose that session data.
Application level state
Using Application state is similar to using session state: the HttpApplicationState class contains an Application property that provides dictionary access to variables:
Application("key") = value
The difference with session state is that data is stored per IIS application as opposed to per user session. Thus if you set up:
Application("name") = "Chris Sully"
in your global.asax file for the application this value is available to all pages of your application and will be the same value for all users who visit.
Setting state information is not limited to global.asax – any page of the application being run by any user can do this. There is an implication here – multiple users trying to change the value of an application variable could lead to data inconsistencies. To prevent this the HttpApplicationState class provides the Lock method, which ensures only one user (actually a process thread) is accessing the application state object at any one time. Thus you should call Lock before amending an application value and UnLock immediately afterwards so others have access to the variable.
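A minimal sketch of this locking pattern, using a hypothetical hit counter variable:

```vb
' Lock application state so no other request thread can modify it mid-update.
Application.Lock()
' Read, modify and write the shared value while holding the lock.
Application("hitCount") = CInt(Application("hitCount")) + 1
' Release the lock immediately so other requests are not blocked.
Application.UnLock()
```

Keeping the locked section as short as possible minimizes the time other requests spend waiting.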
Other options for state management: caching
Another method of storing data on the server is via the cache class. This effectively extends the capabilities of storing data at the application level. Frequently used data items may be stored in the cache memory for presentation. For example, data for a drop down list may be stored in a cached object so that the cached copy is used rather than obtaining the data from the database on every occasion.
You may store your objects in cache memory and set the properties of the cache to control when the cache might release its resources. For example, you might use a sliding expiration (a TimeSpan) that specifies how long the item should remain in the cache after it is last accessed. You can also specify a dependency of the cached item on a datasource, for example an XML file. If this file is changed the cache can be programmed to react to this event accordingly and update the cached object.
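As a sketch of these two options combined (the key, data object and file name here are illustrative), an item can be cached with a dependency on an XML file and a sliding expiration via Cache.Insert:

```vb
' Cache the data under the key "countries", invalidating the entry if
' countries.xml changes, and expiring it 10 minutes after last access.
Cache.Insert("countries", countryData, _
    New CacheDependency(Server.MapPath("countries.xml")), _
    Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(10))
```

CacheDependency lives in the System.Web.Caching namespace; on a subsequent request you would check whether Cache("countries") is Nothing before re-querying the database.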
For further information on this subject see my article on caching on dotnetjohn.
Conclusion
I hope this article has provided a useful overview of the state management options and support in .NET. With all the forms of state management there exists a balance between the needs of the application and the associated resources used.
In particular, be aware that page level state is enabled by default and should be disabled if not required, particularly if you are manipulating significant amounts of data within DataBound controls. As well as using server resources, leaving ViewState enabled in such a situation will increase the page size, and hence download times, significantly. In such a situation it may be better to cache the data on the server for the postback rather than transmit the data in the ViewState, or even rely on the data caching facilities of your chosen DBMS.
ASP.Net provides significantly extended support for session state maintenance via SQLServer and Session State Server. The choice is down to the needs of your application and, in particular, how important data isolation from your IIS application is.
References
.NET Framework SDK documentation
ASP.NET: Tips, Tutorials and Code, Sams Publishing, Mitchell et al.
Note that this article was first published on 02/01/2003. The original article is available on DotNetJohn, where the code is also available for download.
Introduction
This article considers and develops a reasonably secure login facility for use within an Internet application utilizing the inbuilt features of ASP.Net. This login facility is intended to protect an administrative section of an Internet site where there are only a limited number of users who will have access to that section of the site. The rest of the site will be accessible to unauthorized users. This problem specification will guide our decision-making.
Also presented are suggestions as to how this security could be improved if you cross the boundary of ASP.Net functionality into supporting technologies. Firstly, however I'll provide an overview of web application security and the features available in ASP.Net, focusing particularly on forms based authentication, as this is the approach we shall eventually use as the basis for our login facility.
Pre-requisites for this article include some prior knowledge of ASP.Net (web.config, security, etc.) and related technologies (e.g. IIS) as well as a basic understanding of general web and security related concepts, e.g. HTTP, cookies.
Web application security: authentication and authorization
Different web sites require different levels of security. Some portions of a web site commonly require password-protected areas and there are many ways to implement such security, the choice largely dependent on the problem domain and the specific application requirements.
Security for web applications is comprised of two processes: authentication and authorization. The process of identifying your user and authenticating that they are who they claim they are is authentication. Authorization is the process of determining whether the authenticated user has access to the resource they are attempting to access.
The authentication process requires validation against an appropriate data store, commonly called an authority, for example an instance of Active Directory.
ASP.Net provides authorization services using both the URL and the file of the requested resource. Both checks must be successful for the user to be allowed to proceed to access said resource.
Authentication via ASP.Net
ASP.Net arrives complete with the following authentication providers that provide interfaces to other levels of security existing within and/ or external to the web server computer system:
- integrated windows authentication using NTLM or Kerberos.
- forms based authentication
- passport authentication
As with other configuration requirements web.config is utilized to define security settings such as:
- the authentication method to use
- the users who are permitted to use the application
- how sensitive data should be encrypted
Looking at each authentication method in turn with a view to their use in our login facility:
Integrated Windows
This is a secure method but it is only supported by Internet Explorer and therefore most suited to intranet situations where browser type can be controlled. In fact it is the method of choice for intranet applications. Typically it involves authentication against a Windows domain authority such as Active Directory or the Security Accounts Manager (SAM) using Windows NT Challenge/Response (NTLM).
Integrated Windows authentication uses the domain, username and computer name of the client user to generate a 'challenge'. The client must enter the correct password, which causes the correct response to be generated and returned to the server.
In order for integrated Windows authentication to be used successfully in ASP.Net the application needs to be properly configured to do so via IIS – you will commonly want to remove anonymous access so users are not automatically authenticated via the machine's IUSR account. You should also configure the directory where the protected resource is located as an application, though this may already be the case if this is the root directory of your web application.
Consideration of suitability
As integrated Windows authentication is specific to Internet Explorer it is not a suitable authentication method for use with our login facility that we have specified we wish to use for Internet applications. In such a scenario a variety of browser types and versions may provide the client for our application and we would not wish to exclude a significant percentage of our possible user population from visiting our site.
Forms based authentication
This is cookie-based authentication by another name and with a nice wrapper of functionality around it. Such authentication is commonly deemed sufficient for large, public Internet sites. Forms authentication works by redirecting unauthenticated requests to a login page (typically username and a password are collected) via which the credentials of the user are collected and validated. If validated a cookie is issued which contains information subsequently used by ASP.Net to identify the user. The longevity of the cookie may be controlled: for example you may specify that the cookie is valid only for the duration of the current user session.
Forms authentication is flexible in the authorities against which it can validate. For example, it can validate credentials against a Windows-based authority, as per integrated Windows, or other data sources such as a database or a simple text file. A further advantage over integrated Windows is that you have control over the login screen used to authenticate users.
Forms authentication is enabled in the applications web.config file, for example:
<configuration>
<system.web>
<authentication mode="Forms">
<forms name=".AUTHCOOKIE" loginUrl="login.aspx" protection="All" />
</authentication>
<machineKey validationKey="AutoGenerate" decryptionKey="AutoGenerate" validation="SHA1" />
<authorization>
<deny users="?" />
</authorization>
</system.web>
</configuration>
This is mostly self-explanatory. The name attribute refers to the name of the cookie. The machineKey element controls the keys used for validation and encryption. In a web farm scenario with multiple web servers the keys would be hard-coded to enable authentication to work – otherwise different machines would be using different validation keys! The '?' in the authorization section above, by the way, represents the anonymous user; an '*' indicates all users.
Within the login page you could validate against a variety of data sources. This might be an XML file of users and passwords. This is an insecure solution however so should not be used for sensitive data though you could increase security by encrypting the passwords.
Alternatively you can use the credentials element of the web.config file, which is a sub-element of the <forms> element, as follows:
<credentials passwordFormat="Clear">
<user name="Chris" password="Moniker" />
<user name="Maria" password="Petersburg" />
</credentials>
Using this method means there is very little coding for the developer to undertake due to the support provided by the .NET Framework, as we shall see a little later when we revisit this method.
Note also the passwordFormat attribute is required, and can be one of the following values:
Clear
Passwords are stored in clear text. The user password is compared directly to this value without further transformation.
MD5
Passwords are stored using a Message Digest 5 (MD5) hash digest. When credentials are validated, the user password is hashed using the MD5 algorithm and compared for equality with this value. The clear-text password is never stored or compared when using this value. This algorithm produces better performance than SHA1.
SHA1
Passwords are stored using the SHA1 hash digest. When credentials are validated, the user password is hashed using the SHA1 algorithm and compared for equality with this value. The clear-text password is never stored or compared when using this value. Use this algorithm for best security.
What is hashing? Hash algorithms map binary values of an arbitrary length to small binary values of a fixed length, known as hash values. A hash value is an effectively unique and extremely compact numerical representation of a piece of data. The hash size for the SHA1 algorithm is 160 bits. SHA1 is more secure than the alternative MD5 algorithm, at the expense of performance.
At this time there is no ASP.Net tool for creating hashed passwords for insertion into configuration files. However, there are classes and methods that make it easy to create them programmatically, in particular the FormsAuthentication class: its HashPasswordForStoringInConfigFile method can do the hashing. At a lower level, you can use the System.Security.Cryptography classes as well. We'll be looking at the former method later in this article.
The flexibility of the authentication provider for Forms Authentication continues as we can select SQLServer as our data source though the developer needs then to write bespoke code for validating user credentials against the database. Typically you will then have a registration page to allow users to register their login details which will then be stored in SQLServer for use when the user then returns to a protected resource and is redirected to the login page by the forms authentication, assuming the corresponding cookie is not still in existence.
This raises a further feature - we would want to give all users access to the registration page so that they may register but other resources should be protected. Additionally, there may be a third level of security, for example an admin page to list all users registered with the system. In such a situation we can have multiple system.web sections in our web.config file to support the different levels of authorization, as follows:
<configuration>
<system.web>
<authentication mode="Forms">
<forms name=".AUTHCOOKIE" loginUrl="login.aspx" protection="All" />
</authentication>
<machineKey validationKey="AutoGenerate" decryptionKey="AutoGenerate" validation="SHA1" />
<authorization>
<deny users="?" />
</authorization>
</system.web>
<location path="register.aspx">
<system.web>
<authorization>
<allow users="*,?" />
</authorization>
</system.web>
</location>
<location path="admin.aspx">
<system.web>
<authorization>
<allow users="admin" />
<deny users="*" />
</authorization>
</system.web>
</location>
</configuration>
Thus only the admin user can access admin.aspx, whilst all users can access register.aspx so if they don't have an account already they can register for one. Any other resource request will cause redirection to login.aspx, if a valid authentication cookie by the name of .AUTHCOOKIE isn't detected within the request. On the login page you would provide a link to register.aspx for users who require the facility.
Alternatively you can have multiple web.config files, with that for a sub-directory overriding that for the application as a whole – an approach that we shall implement later for completeness.
Finally, you may also perform forms authentication in ASP.Net against a Web Service, which we won’t consider any further as this could form an article in itself, and against Microsoft Passport. Passport uses standard web technologies such as SSL, cookies and Javascript and uses strong symmetric key encryption using Triple DES (3DES) to deliver a single sign in service where a user can register once and then has access to any passport enabled site.
Consideration of suitability
Forms based authentication is a flexible mechanism supporting a variety of techniques of various levels of security. Some of the available techniques may be secure enough for implementation if extended appropriately. Some of the techniques are more suited to our problem domain than others, as we’ll discuss shortly.
In terms of specific authorities:
Passport is most appropriately utilized where your site will be used in conjunction with other Passport enabled sites and where you do not wish to maintain your own user credentials data source. This is not the case in our chosen problem domain where Passport would both be overkill and inappropriate.
SQLServer would be the correct solution for the most common web site scenario where you have many users visiting a site where the majority of content is protected. Then an automated registration facility is the obvious solution with a configuration as per the web.config file just introduced. In our chosen problem domain we have stated that we potentially have only a handful of users accounts accessing a small portion of the application functionality and hence SQLServer is not necessarily the best solution, though is perfectly viable.
Use of the credentials section of the forms element of web.config or a simple text/ XML file would seem most suitable for this problem domain. The extra security and simplicity of implementation offered by the former makes this the method of choice.
Authorization via ASP.Net
As discussed earlier this is the second stage of gaining access to a site: determining whether an authenticated user should be permitted access to a requested resource.
File authorization utilizes the Windows security services' access control lists (ACLs), using the authenticated identity to do so. Further, ASP.Net allows refinement based on the URL requested, as you may have recognized in the examples already introduced, as well as the HTTP request method attempted via the verb attribute, valid values of which are: GET, POST, HEAD or DEBUG. I can't think of many occasions in which you'd want to use this feature but you may have other ideas! You may also refer to Windows roles as well as named users.
A few examples to clarify:
<authorization>
<allow users="Chris" />
<deny users="Chris" />
<deny users="*" />
</authorization>
You might logically think this would deny all users access. In fact Chris still has access, as when ASP.Net finds a conflict such as this it will use the earlier declaration.
<authorization>
<allow roles="Administrators" />
<deny users="*" />
</authorization>
<authorization>
<allow verbs="GET, POST" />
</authorization>
Impersonation
Impersonation is the concept whereby an application executes under the context of the identity of the client that is accessing the application. This is achieved by using the access token provided by IIS. You may well know that by default the ASPNET account is used to access ASP.Net resources via the Aspnet_wp.exe process. This, by necessity, has a little more power than the standard guest account for Internet access, IUSR, but not much more. Sometimes you may wish to use a more powerful account to access system resources that your application needs. This may be achieved via impersonation as follows:
<system.web>
<identity impersonate="true" />
</system.web>
or you may specify a particular account:
<system.web>
<identity impersonate="true" userName="domain\sullyc" password="password" />
</system.web>
Of course you will need to provide the involved accounts with the necessary access rights to achieve the goals of the application. Note also that if you don’t remove IUSR from the ACLs then this is the account that will be used – this is unlikely to meet your needs as this is a less powerful account than ASPNET.
ASP.Net will only impersonate during the request handler - tasks such as executing the compiler and reading configuration data occur as the default process account. This is configurable via the <processModel> section of your system configuration file (machine.config). Care should be taken however not to use an inappropriate (too powerful) account which exposes your system to the threat of attacks.
The situation is further complicated by extra features available in IIS6 … but we’ll leave those for another article perhaps as the situation is complex enough!
Let’s move onto developing a login solution for our chosen problem domain.
Our Chosen Authentication Method – how secure is it?
We've chosen forms based authentication utilizing the web.config file as our authority. How secure is the mechanism involved? Let's consider this by examining the process in a little more detail. As a reminder, our application scenario is one of a web site where we've put content which we want to enable restricted access to in a sub-directory named secure. We have configured our web.config files to restrict access to the secure sub-directory, as described above. We deny access to the anonymous users (i.e. unauthenticated users) to the secure sub-directory:
<authorization>
<deny users="?" />
</authorization>
If someone requests a file in the secure sub-directory then ASP.Net URL authentication kicks in - ASP.Net checks to see if a valid authentication cookie is attached to the request. If the cookie exists, ASP.Net decrypts it, validates it to ensure it hasn't been tampered with, and extracts identity information that it assigns to the current request. Encryption and validation can be turned off but are enabled by default. If the cookie doesn't exist, ASP.Net redirects the request to the login page. If the login is successful, the authentication cookie is created and passed to the user’s browser. This can be configured to be a permanent cookie or a session-based cookie. Possibly slightly more secure is a session-based cookie where the cookie is destroyed when the user leaves the application or the session times out. This prevents someone else accessing the application from the user’s client machine without having to login.
Given the above scenario we have two security issues for further consideration:
- How secure is the cookie based access? Note above that encryption and validation are used by default. How secure are these in reality?
Validation works exactly the same for authentication cookies as it does for view state: the <machineKey> element's validationKey is appended to the cookie, the resulting value is hashed, and the hash is appended to the cookie. When the cookie is returned in a request, ASP.Net verifies that it wasn't tampered with by rehashing the cookie and comparing the new hash to the one accompanying the cookie. Encryption works by encrypting the cookie, hash value and all with <machineKey>'s decryptionKey attribute. Validation consumes less CPU time than encryption and prevents tampering. It does not, however, prevent someone from intercepting an authentication cookie and reading its contents.
Encrypted cookies can't be read or altered, but they can be stolen and used illicitly. Time-outs are the only protection a cookie offers against replay attacks, and they apply to session cookies only. The most reliable way to prevent someone from spoofing your site with a stolen authentication cookie is to use an encrypted communications link (HTTPS). Talking of which, this is one situation when you might want to turn off both encryption and validation. There is little point encrypting the communication again if you are already using HTTPS.
Whilst on the subject of cookies, remember also that cookie support can be turned off via the client browser. This should also be borne in mind when designing your application.
- How secure is the logging on procedure to a web form? Does it use clear text username and password transmission that could be susceptible to observation, capture and subsequent misuse?
Yes is the answer. Thus if you want a secure solution but don't want the overhead of encrypting communications to all parts of your site, consider at least submitting user names and passwords over HTTPS, assuming your web hosting service provides it.
To reiterate, the forms security model allows us to configure keys to use for encryption and decryption of forms authentication cookie data. Here we have a problem - this only encrypts the cookie data - the initial login screen data, i.e. email / password is not encrypted. We are using standard HTTP transmitting data in clear text which is susceptible to interception. The only way around this is to go to HTTPS and a secure communication channel.
Which perhaps begs the question – what is the point of encrypting the cookie data if our access is susceptible anyway if we are using an unsecured communication channel? Well, if we enable cookie authentication when we first login then subsequent interaction with the server will be more secure. After that initial login a malicious attacker could not easily gain our login details and gain access to the site simply by examining the contents of the packets of information passed to and from the web server. However, note the earlier comments on cookie theft. It is important to understand these concepts and the impact our decisions have on the overall security of our application data.
It is perhaps unsurprising given the above that for the most secure applications:
- A secure HTTPS channel is used whenever dealing with username/ password/ related data.
- Cookies are not exclusively relied upon: often, though recall of certain information is cookie-based, important transactions still require authorization via an encrypted password or number.
It is up to the application architect/ programmer to decide whether this level of security is appropriate to their system.
Finally, before we actually come up with some code remember that forms based security secures only ASP.Net resources. It doesn’t protect HTML files, for example. Just because you have secured a directory using web.config / ASP.Net doesn’t mean you have secured all files in that directory. To do this you could look at features available via IIS.
The 'Application'
Finally to the code, and making our ASP.Net application as secure as possible using the facilities ASP.Net provides. Take the above-described scenario, where we have a secure sub-directory whose files we wish to protect. We anticipate there will only be a handful of users who will need access to the directory, and hence this is a suitable problem domain to be addressed with a web.config-based authority solution, as earlier decided.
Starting with our web.config file: we could secure the sub-directory via the location element, as described above, but just to demonstrate the alternative double-web.config approach, here is the web.config at the root level:
<configuration>
<system.web>
<authentication mode="Forms">
<forms name=".AUTHCOOKIE" loginUrl="login_credentials.aspx" protection="All">
<credentials passwordFormat="Clear">
<user name="chris" password="password" />
</credentials>
</forms>
</authentication>
<machineKey validationKey="AutoGenerate" decryptionKey="AutoGenerate" validation="SHA1" />
<authorization>
<allow users="*" />
</authorization>
</system.web>
</configuration>
You can see that this sets up forms based security enabling validation and encryption and specifies a credentials list of one user, currently in Cleartext format but shortly we'll see how to encrypt the password via SHA1. You'll also see that this file doesn’t actually restrict user access at all so URL based authentication will not be used at the root level of our application. However, if we extend the configuration for the secure sub-directory via an additional web.config file:
<configuration>
<system.web>
<authorization>
<deny users="?" />
</authorization>
</system.web>
</configuration>
Then if a user attempts to access an ASP.Net resource in secure they will be dealt with according to the combination of directives in this web.config file and those inherited from the parent web.config file (and machine.config, for that matter).
Onto the login file: you will need form fields to allow entry of username and password data. Note that security will be further improved by enforcing minimum standards on passwords (e.g. length), which can be achieved by validation controls. There is only minimal validation in the example. Note that there is no facility to request a 'persistent cookie' as this presents a minor security risk. It is up to you to decide whether a permanent cookie is acceptable in your application domain.
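A minimal sketch of such a login page markup (the control IDs are chosen to match the validation code that follows; the validator settings and button handler name are illustrative):

```aspx
<%@ Page Language="VB" %>
<html>
<body>
<form runat="server" ID="LoginForm">
Username: <input id="UserName" type="text" runat="server" /><br />
Password: <input id="UserPass" type="password" runat="server" />
<asp:RequiredFieldValidator id="PassRequired" runat="server"
    ControlToValidate="UserPass" ErrorMessage="Password is required." /><br />
<asp:Button id="LoginButton" runat="server" Text="Log in" OnClick="LoginBtn_Click" />
<asp:Label id="Msg" runat="server" />
</form>
</body>
</html>
```

The server-side HTML inputs expose a Value property, which is what the authentication code reads.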
Then in the login file, login_credentials.aspx, after allowing the user to enter username and password data, in the sub executed on the server when the submit form button is clicked we validate the entered data against the web.config credentials data, achieved simply as follows:
If FormsAuthentication.Authenticate(UserName.Value, UserPass.Value) Then
    FormsAuthentication.RedirectFromLoginPage(UserName.Value, False)
Else
    Msg.Text = "Credentials not valid."
End If
Could it be any simpler? The FormsAuthentication object knows what authority it needs to validate against as this has been specified in the web.config file. If the user details match, the code proceeds to redirect back to the secured resource and also sets the cookie for the user session based on the user name entered. The parameter 'false' indicates that the cookie should not be permanently stored on the client machine. Its lifetime will be the duration of the user session by default. This can be altered if so desired.
Back to web.config to improve the security. The details are being stored unencrypted – we can encrypt them with the aforementioned HashPasswordForStoringInConfigFile of the FormsAuthentication class, achieved simply as follows:
Private Function encode(ByVal cleartext As String) As String
    Return FormsAuthentication.HashPasswordForStoringInConfigFile(cleartext, "SHA1")
End Function
This is the key function of the encode.aspx file provided with the code download, which accepts a text string (the original password – ‘password’ in this case) and outputs a SHA1 encoded version care of the above function.
Thus, our new improved configuration section of our root web.config file becomes:
<credentials passwordFormat="SHA1">
<user name="chris" password="5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8" />
</credentials>
To summarize the involved files:
Root/web.config | root web.config file
Root/webform1.aspx | test page
Root/login_credentials.aspx | login page
Root/encode.aspx | form to SHA1 encode a password for <credentials>
Root/secure/web.config | directives to override security for this sub-directory to deny anonymous access
Root/secure/webform1.aspx | test page
Conclusions
We’ve looked at the new security features of ASP.Net focusing particularly on an application scenario where forms based authentication uses the credentials section of web.config, but presenting this in the context of wider security issues.
In summary you should consider forms based authentication when:
- User names and passwords are stored somewhere other than Windows Accounts (it is possible to use forms authentication with Windows Accounts but in this case Integrated Windows authentication may well be the best choice).
- You are deploying your application over the Internet and hence you need to support all browsers and client operating systems.
- You want to provide your own user interface form as a logon page.
You should not consider forms based authentication when:
- You are deploying an application on a corporate intranet and can take advantage of the more secure Integrated Windows authentication.
- You are unable to perform programmatic access to verify the user name and password.
Further security considerations for forms based authentication:
- If users are submitting passwords via the logon page, you can (should?) secure the channel using SSL to prevent passwords from being easily obtained by hackers.
- If you are using cookies to maintain the identity of the user between requests, you should be aware of the potential security risk of a hacker "stealing" the user's cookie using a network-monitoring program. To ensure the site is completely secure when using cookies you must use SSL for all communications with the site. This will be an impractical restriction for most sites due to the significant performance overhead. A compromise available within ASP.Net is to have the server regenerate cookies at timed intervals. This policy of cookie expiration is designed to prevent another user from accessing the site with a stolen cookie.
Finally, different authorities suit forms-based authentication in different problem domains. For our scenario, where the number of users was limited because we were only protecting a specific administrative resource, a credentials-section or XML-file-based authority is adequate. For a scenario where all site information is ‘protected’, a database authority is most likely the optimal solution.
References
ASP.Net: Tips, Tutorial and Code
Scott Mitchell et al.
Sams
.Net SDK documentation
Various online articles, in particular:
ASP.Net Security: An Introductory Guide to Building and Deploying More Secure Sites with ASP.Net and IIS -- MSDN Magazine, April 2002
http://msdn.microsoft.com/msdnmag/issues/02/04/ASPSec/default.aspx
An excellent and detailed introduction to IIS and ASP.Net security issues.
Authentication in ASP.Net: .Net Security Guidance
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/authaspdotnet.asp
You may download the code here.
Note that this article was first published on 02/01/2003. The original article is available on DotNetJohn, where the code is also available for download and execution.
Introduction
In this article we’re going to take a look at the features available to the ASP.NET programmer that enable performance improvement via caching. Caching is the keeping of frequently used data in memory for ready access by your ASP.NET application. As such, caching is a trade-off between the resources needed to obtain the data and the resources needed to store it. There is little point caching data that will be requested infrequently, as this simply wastes memory and may have a negative impact on system performance. On the other hand, if some data is required every time a user visits the home page of your application and that data only changes once a day, then there are big resource savings to be made by storing it in memory rather than retrieving it on every request – even allowing for the fact that the DBMS will probably be doing its own caching. Typically you will want to minimise requests to your data store as, again typically, these will be the most resource-hungry operations associated with your application.
In ASP.NET there are two areas where caching techniques arise:
- Caching of rendered pages, page fragments or WebService output: termed ‘output caching’. Output caching can be implemented either declaratively or programmatically.
- Caching of data / data objects programmatically via the cache class.
We'll return to the Cache class later in the article, but let’s focus on Output Caching to start with and Page Output Caching in particular.
You can either declaratively use the Output Caching support available to web forms/pages, page fragments and Web Services as part of their implementation, or you can cache programmatically using the HttpCachePolicy class exposed through the HttpResponse.Cache property available within the .NET Framework. I'll not look at the Web Service options in any detail here, only mentioning that the WebMethod attribute applied to methods to expose them as Web Services has a CacheDuration property which the programmer may specify.
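For completeness, a minimal sketch of the Web Service case (the method name and body are illustrative, not from the article's code):

```vbnet
' Cache this Web Service method's output for 60 seconds via CacheDuration.
<WebMethod(CacheDuration:=60)> _
Public Function GetProductCount() As Integer
    ' ... expensive work here runs at most once per minute ...
End Function
```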
Page Output Caching
Let’s consider a base example and then examine in a little detail the additional parameters available to us programmers. To minimally enable caching for a web forms page (or user controls) you can use either of the following:
1. Declarative specification via the @OutputCache directive e.g.:
<%@ OutputCache Duration="120" VaryByParam="none" %>
2. Programmatic specification via the Cache property of the HttpResponse class, e.g.:
Response.Cache.SetExpires(DateTime.Now.AddMinutes(2))
Response.Cache.SetCacheability(HttpCacheability.Public)
These are equivalent and will cache the page for 2 minutes. What does this mean exactly? When the document is initially requested, the rendered page is cached. Until the specified expiration, all requests for that page are served from the cache. On cache expiration the page is removed from the cache; on the next request the page is re-executed, and its output again cached.
In fact @OutputCache is a higher-level wrapper around the HttpCachePolicy class exposed via the HttpResponse class so rather than just being equivalent they ultimately resolve to exactly the same IL code.
Looking at the declarative example, what does VaryByParam="none" mean? HTTP supports two methods of passing state between pages: GET and POST. GET requests are characterised by the use of the query string to pass parameters, e.g. default.aspx?id=1&name=chris, whereas POST indicates that the parameters are passed in the body of the HTTP request. In the example above, caching based on such parameters is disabled. To enable it, you would set VaryByParam to ‘name’, for example – or whichever parameters you wish to cache on. This causes the creation of different cache entries for different parameter values; the output of default.aspx?id=2&name=maria would then also be cached. Note that the VaryByParam attribute is mandatory.
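For instance, to create separate cache entries per id/name combination in the URLs above, you might write (a sketch, not from the code download):

```aspx
<%@ OutputCache Duration="120" VaryByParam="id;name" %>
```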
Returning to the programmatic example and considering when you would choose this second method over the first. Firstly, as it’s programmatic, you would use this option when the cache settings needed to be set dynamically. Secondly, you have more flexibility in option setting with HttpCachePolicy as exposed by the HttpResponse.cache property.
You may be wondering exactly what
Response.Cache.SetCacheability(HttpCacheability.Public)
achieves. This sets the Cache-Control HTTP header – here to public – to specify that the response is cacheable by both clients and shared (proxy) caches: basically everybody may cache it. The other options are NoCache, Private and Server.
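As a sketch, the alternatives and the effect each produces (you would call SetCacheability once, with whichever value suits your page):

```vbnet
Response.Cache.SetCacheability(HttpCacheability.NoCache) ' Cache-Control: no-cache
Response.Cache.SetCacheability(HttpCacheability.Private) ' Cache-Control: private (the default)
Response.Cache.SetCacheability(HttpCacheability.Public)  ' Cache-Control: public
Response.Cache.SetCacheability(HttpCacheability.Server)  ' cached on the server; clients told no-cache
```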
We’ll return to Response.Cache after looking at the directive option in more detail.
The @OutputCache Directive
First an example based on what we've seen thus far: output caching based on querystring parameters:
Note this example requires connectivity to a standard SQL Server installation, in particular the Northwind sample database. You may need to change the string constant strConn to an appropriate connection string for your system for the sample code presented in this article to work. If you have no easy access to SQL Server, you could load some data in from an XML file, or simply pre-populate a datalist (for example) and bind the datagrid to that data structure.
output_caching_directive_example.aspx
<%@ OutputCache Duration="30" VaryByParam="number" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<html>
<head></head>
<body>
<a href="output_caching_directive_example.aspx?number=1">1</a>-
<a href="output_caching_directive_example.aspx?number=2">2</a>-
<a href="output_caching_directive_example.aspx?number=3">3</a>-
<a href="output_caching_directive_example.aspx?number=4">4</a>-
<a href="output_caching_directive_example.aspx?number=5">5</a>-
<a href="output_caching_directive_example.aspx?number=6">6</a>-
<a href="output_caching_directive_example.aspx?number=7">7</a>-
<a href="output_caching_directive_example.aspx?number=8">8</a>-
<a href="output_caching_directive_example.aspx?number=9">9</a>
<p>
<asp:Label id="lblTimestamp" runat="server" enableviewstate="false" />
<p>
<asp:DataGrid id="dgProducts" runat="server" enableviewstate="false" />
</body>
</html>
<script language="vb" runat="server">
const strConn = "server=localhost;uid=sa;pwd=;database=Northwind"
Sub Page_Load(sender as Object, e As EventArgs)
If Not Request.QueryString("number") Is Nothing Then
lblTimestamp.Text = DateTime.Now.TimeOfDay.ToString()
dim SqlConn as new SqlConnection(strConn)
dim SqlCmd as new SqlCommand("SELECT TOP " _
& Request.QueryString("number") & _
" * FROM Products", SqlConn)
SqlConn.Open()
dgProducts.DataSource = SqlCmd.ExecuteReader(CommandBehavior.CloseConnection)
Page.DataBind()
End If
End Sub
</script>
Thus, if you click through some of the links to the parameterised pages and then return to them, you will see the timestamp remains the same for each parameter setting until the 30 seconds have elapsed, when the data is loaded again. That is, caching is performed per parameter value, as indicated by the different timestamps.
The full specification of the OutputCache directive is:
<%@ OutputCache Duration="#ofseconds"
Location="Any | Client | Downstream | Server | None"
VaryByControl="controlname"
VaryByCustom="browser | customstring"
VaryByHeader="headers"
VaryByParam="parametername" %>
Examining these attributes in turn:
Duration
This is the time, in seconds, that the page or user control is cached. Setting this attribute on a page or user control establishes an expiration policy for HTTP responses from the object and will automatically cache the page or user control output. Note that this attribute is required. If you do not include it, a parser error occurs.
Location
This allows control of from where the client receives the cached document and should be one of the OutputCacheLocation enumeration values. The default is Any. This attribute is not supported for @OutputCache directives included in user controls. The enumeration values are:
Any: the output cache can be located on the browser client (where the request originated), on a proxy server (or any other server) participating in the request, or on the server where the request was processed.
Client: the output cache is located on the browser client where the request originated.
Downstream: the output cache can be stored in any HTTP 1.1 cache-capable devices other than the origin server. This includes proxy servers and the client that made the request.
None: the output cache is disabled for the requested page.
Server: the output cache is located on the Web server where the request was processed.
VaryByControl
A semicolon-separated list of strings used to vary the output cache. These strings represent fully qualified names of properties on a user control. When this attribute is used for a user control, the user control output is varied to the cache for each specified user control property. Note that this attribute is required in a user control @OutputCache directive unless you have included a VaryByParam attribute. This attribute is not supported for @OutputCache directives in ASP.NET pages.
VaryByCustom
Any text that represents custom output caching requirements. If this attribute is given a value of browser, the cache is varied by browser name and major version information. If a custom string is entered, you must override the HttpApplication.GetVaryByCustomString method in your application's Global.asax file. For example, if you wanted to vary caching by platform you would set the custom string to ‘Platform’ and override GetVaryByCustomString to return the platform used by the requester via HttpContext.Request.Browser.Platform.
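A sketch of what that override might look like in Global.asax (the custom string ‘Platform’ is our assumed value, paired with VaryByCustom="Platform" in the page's @OutputCache directive):

```vbnet
' Global.asax: return a string identifying the cache variation for 'arg'.
Public Overrides Function GetVaryByCustomString( _
        ByVal context As HttpContext, ByVal arg As String) As String
    If arg = "Platform" Then
        ' One cached version per client platform reported by the browser
        Return context.Request.Browser.Platform
    End If
    Return MyBase.GetVaryByCustomString(context, arg)
End Function
```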
VaryByHeader
A semicolon-separated list of HTTP headers used to vary the output cache. When this attribute is set to multiple headers, the output cache contains a different version of the requested document for each specified header. Example headers you might use are: Accept-Charset, Accept-Language and User-Agent but I suggest you consider the full list of header options and consider which might be suitable options for your particular application. Note that setting the VaryByHeader attribute enables caching items in all HTTP/1.1 caches, not just the ASP.NET cache. This attribute is not supported for @OutputCache directives in user controls.
VaryByParam
As already introduced this is a semicolon-separated list of strings used to vary the output cache. By default, these strings correspond to a query string value sent with GET method attributes, or a parameter sent using the POST method. When this attribute is set to multiple parameters, the output cache contains a different version of the requested document for each specified parameter. Possible values include none, *, and any valid query string or POST parameter name. Note that this attribute is required when you output cache ASP.NET pages. It is required for user controls as well unless you have included a VaryByControl attribute in the control's @OutputCache directive. A parser error occurs if you fail to include it. If you do not want to specify a parameter to vary cached content, set the value to none. If you want to vary the output cache by all parameter values, set the attribute to *.
Returning now to the programmatic alternative for Page Output Caching:
Response.Cache
As stated earlier @OutputCache is a higher-level wrapper around the HttpCachePolicy class exposed via the HttpResponse class. Thus all the functionality of the last section is also available via HttpResponse.Cache. For example, our previous code example can be translated as follows to deliver the same functionality:
output_caching_programmatic_example.aspx
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<html>
<head></head>
<body>
<a href="output_caching_programmatic_example.aspx?number=1">1</a>-
<a href="output_caching_programmatic_example.aspx?number=2">2</a>-
<a href="output_caching_programmatic_example.aspx?number=3">3</a>-
<a href="output_caching_programmatic_example.aspx?number=4">4</a>-
<a href="output_caching_programmatic_example.aspx?number=5">5</a>-
<a href="output_caching_programmatic_example.aspx?number=6">6</a>-
<a href="output_caching_programmatic_example.aspx?number=7">7</a>-
<a href="output_caching_programmatic_example.aspx?number=8">8</a>-
<a href="output_caching_programmatic_example.aspx?number=9">9</a>
<p>
<asp:Label id="lblTimestamp" runat="server" enableviewstate="false" />
<p>
<asp:DataGrid id="dgProducts" runat="server" enableviewstate="false" />
</body>
</html>
<script language="vb" runat="server">
const strConn = "server=localhost;uid=sa;pwd=;database=Northwind"
Sub Page_Load(sender as Object, e As EventArgs)
Response.Cache.SetExpires(DateTime.Now.AddSeconds(30))
Response.Cache.SetCacheability(HttpCacheability.Public)
Response.Cache.VaryByParams("number")=true
If Not Request.QueryString("number") Is Nothing Then
lblTimestamp.Text = DateTime.Now.TimeOfDay.ToString()
dim SqlConn as new SqlConnection(strConn)
dim SqlCmd as new SqlCommand("SELECT TOP " _
& Request.QueryString("number") & " * FROM Products", SqlConn)
SqlConn.Open()
dgProducts.DataSource = SqlCmd.ExecuteReader(CommandBehavior.CloseConnection)
Page.DataBind()
End If
End Sub
</script>
The three lines of importance are:
Response.Cache.SetExpires(DateTime.Now.AddSeconds(30))
Response.Cache.SetCacheability(HttpCacheability.Public)
Response.Cache.VaryByParams("number")=true
It is only the third line you’ve not seen before. It is equivalent to VaryByParam="number" in the directive example. Thus you can see that the various options of the OutputCache directive correspond to properties and methods exposed by Response.Cache. Apart from the method of access, the pertinent information is, unsurprisingly, very similar to that presented above for the directive version.
Thus, in addition to the VaryByParams property there is a VaryByHeaders property as well as a SetVaryByCustom method. If you are interested in the extra functionality exposed via these and associated members, I suggest you peruse the relevant sections of the .NET SDK documentation.
Fragment Caching
Fragment caching is really a minor variation of page caching and almost all of what we’ve described already is relevant. The ‘fragment’ referred to is actually one or more user controls included on a parent web form. Each user control can have a different cache duration. You simply specify @OutputCache for the user controls and they will be cached as per those specifications. Note that any caching in the parent web form overrides that specified in the included user controls. So, for example, if the page is set to 30 seconds and a user control to 10, the user control's cached output will not be refreshed for 30 seconds.
It should be noted that of the standard options only the VaryByParam attribute is valid for controlling caching of controls. An additional attribute is available within user controls: VaryByControl, as introduced above, allowing multiple representations of a user control dependent on one or more of its exposed properties. So, extending our example above, if we implemented a control that exposed the SQL query used to generate the datareader which is bound to the datagrid we could cache on the basis of the property which is the SQL string. Thus we can create powerful controls with effective caching of the data presented.
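A hedged sketch of such a user control (the file name and the Sql property are invented for illustration; the data-access code that consumes the property is omitted):

```aspx
<%-- ProductGrid.ascx: one cached copy per distinct value of the Sql property --%>
<%@ Control Language="vb" %>
<%@ OutputCache Duration="60" VaryByControl="Sql" %>
<asp:DataGrid id="dgProducts" runat="server" />
<script language="vb" runat="server">
    Private _sql As String
    ' The exposed property on which the output cache varies
    Public Property Sql() As String
        Get
            Return _sql
        End Get
        Set(ByVal Value As String)
            _sql = Value
        End Set
    End Property
</script>
```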
Programmatic Caching: using the Cache Class to Cache Data
ASP.NET output caching is a great way to increase performance in your web applications. However, it does not give you control over caching data or objects that can be shared, e.g. sharing a dataset from page to page. The Cache class, part of the System.Web.Caching namespace, enables you to implement application-wide caching of objects rather than page-wide caching as with the HttpCachePolicy class. Note that the lifetime of the cache is equivalent to the lifetime of the application; if the IIS web application is restarted, current cache contents are lost.
The public properties and methods of the cache class are:
Public Properties
Count: gets the number of items stored in the cache.
Item: gets or sets the cache item at the specified key.
Public Methods
Add: adds the specified item to the Cache object with dependencies, expiration and priority policies, and a delegate you can use to notify your application when the inserted item is removed from the Cache.
Equals: determines whether two object instances are equal.
Get: retrieves the specified item from the Cache object.
GetEnumerator: retrieves a dictionary enumerator used to iterate through the key settings and their values contained in the cache.
GetHashCode: serves as a hash function for a particular type, suitable for use in hashing algorithms and data structures like a hash table.
GetType: gets the type of the current instance.
Insert: inserts an item into the Cache object. Use one of the versions of this method to overwrite an existing Cache item with the same key parameter.
Remove: removes the specified item from the application's Cache object.
ToString: returns a String that represents the current Object.
We'll now examine some of the above to varying levels of detail, starting with the most complex, the insert method:
Insert
Data is inserted into the cache with the Insert method of the cache object. Cache.Insert has 4 overloaded methods with the following signatures:
Overloads Public Sub Insert(String, Object)
Inserts an item into the Cache object with a cache key to reference its location and using default values provided by the CacheItemPriority enumeration.
Overloads Public Sub Insert(String, Object, CacheDependency)
Inserts an object into the Cache that has file or key dependencies.
Overloads Public Sub Insert(String, Object, CacheDependency, DateTime, TimeSpan)
Inserts an object into the Cache with dependencies and expiration policies.
Overloads Public Sub Insert(String, Object, CacheDependency, DateTime, TimeSpan, CacheItemPriority, CacheItemRemovedCallback)
Inserts an object into the Cache object with dependencies, expiration and priority policies, and a delegate you can use to notify your application when the inserted item is removed from the Cache.
Summary of parameters:
- String: the name used to reference the cached object
- Object: the object to be cached
- CacheDependency: file or cache-key dependencies for the new item
- DateTime: indicates absolute expiration
- TimeSpan: sliding expiration – the object is removed if more than this interval elapses after its last access
- CacheItemPriority: an enumeration that decides the order of item removal under heavy load
- CacheItemRemovedCallback: a delegate that is called when an item is removed from the cache
Picking out one of these options for further mention: CacheDependency. This attribute allows the validity of the cache to be dependent on a file or another cache item. If the target of such a dependency changes, this can be detected. Consider the following scenario: an application reads data from an XML file that is periodically updated. The application processes the data in the file and represents this via an aspx page. Further, the application caches that data and inserts a dependency on the file from which the data was read. The key aspect is that when the file is updated .NET recognizes the fact as it is monitoring this file. The programmer can interrogate the CacheDependency object to check for any updates and handle the situation accordingly in code.
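A sketch of that scenario (the cache key and the file name products.xml are illustrative):

```vbnet
' Cache a DataSet built from an XML file, with a dependency on that file.
Dim dsProducts As New DataSet()
dsProducts.ReadXml(Server.MapPath("products.xml"))
Cache.Insert("ProductData", dsProducts, _
    New CacheDependency(Server.MapPath("products.xml")))
' Once products.xml changes, the entry is evicted:
' Cache.Get("ProductData") then returns Nothing and we know to reload.
```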
Remove
Other methods of the cache class expose a few less parameters than Insert. Cache.Remove expects a single parameter – the string reference value to the Cache object you want to remove.
Cache.Remove("MyCacheItem")
Get
You can either use the get method to obtain an item from the cache or use the item property. Further, as the item property is the default property, you do not have to explicitly request it. Thus the latter three lines below are equivalent:
Cache.Insert("MyCacheItem", Object)
Dim obj as object
obj = Cache.Get("MyCacheItem")
obj = Cache.Item("MyCacheItem")
obj = Cache("MyCacheItem")
GetEnumerator
Returns a dictionary (key/value pair) enumerator enabling you to enumerate through the collection, adding and removing items as you do so if so inclined. You would use it as follows:
dim myEnumerator as IDictionaryEnumerator
myEnumerator=Cache.GetEnumerator()
While (myEnumerator.MoveNext)
Response.Write(myEnumerator.Key.ToString() & "<br>")
'do other manipulation here if so desired
End While
An Example
To finish off with an example, we’ll cache a subset of the data from our earlier examples using a cache object.
cache_class_example.aspx
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<html>
<head></head>
<body>
<asp:datagrid id="dgProducts" runat="server" enableviewstate="false" />
</body>
</html>
<script language="vb" runat="server">
public sub Page_Load(sender as Object, e as EventArgs)
const strConn = "server=localhost;uid=sa;pwd=;database=Northwind"
dim dsProductsCached as object = Cache.Get("dsProductsCached")
if dsProductsCached is nothing then
Response.Write("Retrieved from database:")
dim dsProducts as new DataSet()
dim SqlConn as new SqlConnection(strConn)
dim sdaProducts as new SqlDataAdapter("select Top 10 * from products", SqlConn)
sdaProducts.Fill(dsProducts, "Products")
dgProducts.DataSource = dsProducts.Tables("Products").DefaultView
Cache.Insert("dsProductsCached", dsProducts, nothing, _
DateTime.Now.AddMinutes(1), TimeSpan.Zero)
else
Response.Write("Cached:")
dgProducts.DataSource = dsProductsCached
end if
DataBind()
end sub
</script>
The important concept here is that if you view the above page, then within one minute save it under a new name and view the renamed copy, you will receive the cached version of the data. Thus the cached data is shared between pages/visitors to your web site.
Wrapping matters up
A final few pointers for using caching, largely reinforcing concepts introduced earlier, with the latter two applying to the use of the cache class:
- Don't cache everything: caching uses memory resources - could these be better utilized elsewhere? You need to trade-off whether to regenerate items, or store them in memory.
- Prioritise items in the cache: if memory is becoming a limited system resource .NET may need to release items from the cache to free up memory. Each time you insert something into the cache, you can use the overloaded version of Insert that allows you to indicate how important it is that the item is cached to your application. This is achieved using one of the CacheItemPriority enumeration values.
- Configure centrally. To maximize code clarity and ease of maintenance store your cache settings, and possibly also instantiate your cache objects, in a key location, for example within global.asax.
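Sketching the priority point with the full Insert overload (the cache key and timings are illustrative):

```vbnet
' Mark an item as high priority so it is among the last evicted under load.
Cache.Insert("HomePageData", dsProducts, Nothing, _
    DateTime.Now.AddMinutes(5), TimeSpan.Zero, _
    CacheItemPriority.High, Nothing)
```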
I hope this article has served as a reasonably complete overview of the caching capabilities available within ASP.NET, and that you are now aware, if you were not before, of the considerable performance savings available reasonably simply via the provided functionality. If you have any comments on the article, particularly if you believe there are errors that should be corrected, let me know at chris.sully@cymru-web.net.
References
ASP.NET: Tips, Tutorial and Code
Scott Mitchell et al.
Sams
Professional ASP.NET
Sussman et al.
Wrox
.NET SDK documentation
Various online articles
You may run output_caching_directive_example.aspx here.
You may run output_caching_programmatic_example.aspx here.
You may run cache_class_example.aspx here.
You may download the code here.
Note that this article was first published on 02/01/2003. The original article is available on DotNetJohn, where the code is also available for download and execution.
Original abstract: XSLT, XPATH and how to apply the concepts in .NET. Examines the concept of transformation and how an XSLT stylesheet defines a transformation by describing the relationship between an input tree and an output tree. Continues to look at the structure of a stylesheet, its main sub-components and introduces examples of what you might expect to see therein. Finally, the article examines how to utilise XSLT stylesheets in .NET.
Knowledge assumed: reasonable understanding of XML and ASP.NET / VB.NET.
Introduction
XML represents a widely accepted mechanism for representing data in a platform-neutral manner. XSLT is the XML based language that has been designed to allow transformation of XML into other structures and formats, such as HTML or other XML documents. XSLT is a template-based language that works in collaboration with the XPath language to transform XML documents.
Note that not all applications are suited to such an approach, though there are benefits to be derived in all but the simplest problem domains. Suitable applications for implementation with XML/XSLT are:
- those that require different views of the same data – hence delivering economies of scale to the developer/ organisation.
- those where maintaining the distinction between data and User Interface elements (UI) is an important consideration – for example for facilitating productivity through specialisation within a development team.
.NET provides an XSLT processor which can take as input XML and XSLT documents and, via matching nodes with specified output templates, produce an output document with the desired structure and content.
I’ll examine the processor and the supporting classes as far as XSLT within .NET is concerned in the latter half of this article. First, XSLT:
XSLT
I’m only going to be able to scratch the surface of the XSLT language here but I shall attempt to highlight a few of the key concepts. It is important to remember that XSLT is a language in its own right and, further, it is one in transition only having been around for a few years now. It’s also a little different in mechanism to most you may have previously come across. XSLT is basically a declarative pattern matching language, and as such requires a different mindset and a little getting used to. It’s (very!) vaguely like SQL or Prolog in this regard. Saying that, there are ways to ‘hook in’ more conventional procedural code.
If it’s not too late, now would be a good time to state what the acronym XSLT stands for: eXtensible Stylesheet Language: Transformations. XSLT grew from a bigger language called XSL. As XSL developed, the decision was made to split it into XSLT, for defining the structural transformations, and ‘the rest’, which is the formatting process of rendering the output. This may commonly be as pixels on a screen, for example, but could also be several other alternatives. ‘The rest’ is still officially called XSL, though it has also come to be known as XSL-FO (XSL Formatting Objects). That’s the last time we’ll mention XSL-FO.
As XSLT developed it became apparent that there was overlap between the expression syntax in XSLT for selecting parts of a document (XPath), and the XPointer language being developed for linking one document to another. The sensible decision was made to define a single language to undertake both purposes. XPath acts as a sub-language within an XSLT stylesheet. An XPath expression may be used for a variety of functions but typically it is employed to identify parts of the input XML document for subsequent processing. I’ll make no significant further effort in the following discourse to emphasise the somewhat academic distinction between XPath and XSLT, the former being such an important, and integral, component of the latter.
A typical XSLT stylesheet consists of a sequence of template rules, defining how elements should be processed when encountered in the XML input file. In keeping with the declarative nature of the XSLT language, you specify what outputs should be produced by particular input patterns, as distinct from a procedural model where you define the sequence of tasks to be performed.
A tree model similar to the XML DOM is employed by both XSLT and XPath. The different types of content in an XML document can be represented by different types of node in a tree view. In XPath the root node is not an element: the root is the parent of the outermost element, representing the document as a whole. The XSLT tree model can represent every well-formed XML document, as well as documents that are not well formed according to the W3C.
An XPath tree is made up of 7 types of node, largely corresponding to entities in the XML source document: root, element, text, attribute, comment, processing instruction and namespace. Each node has metadata created from the source document, in accordance with the type of node under consideration. Considering the node types in a little more detail:
As already mentioned the root node is a singular node that should not be confused with the document element – an outermost element that contains all elements in a valid XML document.
Element and attribute refer to your XML entities, e.g.
<product id="1" type="book">XSLT for Beginners</product>
product is an element and id and type are attributes.
Comment nodes represent comments in the XML source written between <!-- and -->. Similarly, processing instructions are represented in the XML source between <? and ?> delimiters. Note, however, that the XML declaration commonly found at the start of an XML document is only impersonating a processing instruction – it is not represented as a node in the tree.
A text node is a sequence of characters in the PCDATA (parsed character data) part of an element.
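To make the node types concrete, here is a small, hypothetical XML document annotated with the node that each part of it becomes in the XPath tree:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="books.xslt"?>
<!-- the library catalogue -->
<Library>
  <Book id="1">XSLT for Beginners</Book>
</Library>
```

The XML declaration on the first line produces no node at all. The xml-stylesheet line is a processing instruction node, the next line a comment node. Library is an element node (the document element), Book a child element node carrying an attribute node (id), and "XSLT for Beginners" a text node. The root node itself is the invisible parent of the processing instruction, the comment and the Library element.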
The XSLT processor transforms the XML source tree into a result tree, following the instructions in the XSLT stylesheet. Time for an example or two:
Most stylesheets contain a number of template rules of the form:
<xsl:template match="/">
<xsl:message>Started!</xsl:message>
<html>
. . . do other stuff . . .
</html>
</xsl:template>
where the . . . do other stuff . . . might contain further template bodies to undertake further processing, e.g.
<xsl:template match="/">
<xsl:message>Started!</xsl:message>
<html>
<head></head>
<body>
<xsl:apply-templates/>
</body>
</html>
</xsl:template>
As previously stated, both the input document and output document are represented by a tree structure. So, the <body> element above is a literal element that is simply copied over from the stylesheet to the result tree.
<xsl:apply-templates/> means: select all the children of the current node in the source tree, find the matching template rule for each one in the stylesheet, and apply it. The results depend on the content of both the stylesheet and the XML document under consideration. In fact, if there is no template for the root node, the built-in template is invoked, which processes all the children of the root node.
Thus, the simplest way to process an XML document is to write a template rule for each kind of node that might be encountered, or at least for those we are interested in and want to process. This is an example of ‘push’ processing and is logically similar to Cascading Style Sheets (CSS), where one document defines the structure (HTML/XML) and the second (the stylesheet) defines the appearance within that structure. The output is conditional on the structure of the XML document.
Push processing works well when the output is to have the same structure and sequence of data as the input, and the input data is predictable.
Listing 1: simple XML file: books.xml
<?xml version="1.0"?>
<Library>
<Book>
<Title>XSLT Programmers Reference</Title>
<Publisher>Wrox</Publisher>
<Edition>2</Edition>
<Authors>
<Author>Kay, Michael</Author>
</Authors>
<PublishedDate>April 2001</PublishedDate>
<ISBN>1-816005-06-7</ISBN>
</Book>
<Book>
<Title>Dynamical systems and fractals</Title>
<Publisher>Cambridge University Press</Publisher>
<Authors>
<Author>Becker, Karl-Heinz</Author>
<Author>Dorfler, Michael</Author>
<Author>David Sussman</Author>
</Authors>
<PublishedDate>1989</PublishedDate>
<ISBN>0-521-36910-X</ISBN>
</Book>
</Library>
Listing 2: Example of push processing of books.xml: example1.xslt
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="Library">
<html>
<head></head>
<body>
<h1>Library</h1>
<table border="1">
<tr>
<td><b>Title</b></td>
<td><b>PublishedDate</b></td>
<td><b>Publisher</b></td>
</tr>
<xsl:apply-templates/>
</table>
</body>
</html>
</xsl:template>
<xsl:template match="Book">
<tr>
<xsl:apply-templates select="Title"/>
<xsl:apply-templates select="PublishedDate"/>
<xsl:apply-templates select="Publisher"/>
</tr>
</xsl:template>
<xsl:template match="Title | PublishedDate | Publisher ">
<td><xsl:value-of select="."/></td>
</xsl:template>
</xsl:stylesheet>
Note that in the Book template I’ve used <xsl:apply-templates select="…"/> rather than just <xsl:apply-templates/>. This is because there is data in the source XML in which we are not interested, and if we just let the built-in template rules do their stuff the additional data would be copied across to the output tree. I’ve already mentioned the existence of built-in template rules: when apply-templates is invoked to process a node and there is no matching template rule in the stylesheet, a built-in rule is used according to the type of the node. For example, for elements apply-templates is called on the child nodes, and for text and attribute nodes the text is copied over to the result tree. Try making the modification and viewing the results.
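For reference, the behaviour of the built-in rules is equivalent to having the following templates in your stylesheet (comments and processing instructions have a built-in rule too – it simply does nothing):

```xml
<!-- Built-in rule for the root node and element nodes:
     carry on processing their children -->
<xsl:template match="*|/">
  <xsl:apply-templates/>
</xsl:template>

<!-- Built-in rule for text and attribute nodes:
     copy the value to the result tree -->
<xsl:template match="text()|@*">
  <xsl:value-of select="."/>
</xsl:template>
```

It is the first of these that silently walks the tree for you when no explicit template matches, and the second that copies unwanted text across – hence the need for the select attribute above.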
Using the select attribute of apply-templates is one solution – being more careful about which nodes to process (rather than just saying ‘process all children of the current node’). Another is to be more precise about how to process them (rather than just saying ‘choose the best-fit template rule’). This is termed ‘pull’ processing and is achieved using the value-of instruction:
<xsl:value-of select="price"/>
In this alternative pull model the stylesheet provides the structure and the document acts wholly as a data source. Thus a ‘pull’ version of the above example would be:
Listing 3: Example of pull processing of books.xml: example2.xslt
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<xsl:apply-templates/>
</xsl:template>
<xsl:template match="Library">
<html>
<head></head>
<body>
<h1>Library</h1>
<table border="1">
<tr>
<td><b>Title</b></td>
<td><b>PublishedDate</b></td>
</tr>
<xsl:apply-templates/>
</table>
</body>
</html>
</xsl:template>
<xsl:template match="Book">
<tr>
<td><xsl:value-of select="Title"/></td>
<td><xsl:value-of select="PublishedDate"/></td>
</tr>
</xsl:template>
</xsl:stylesheet>
These two examples are not hugely different, but it is worth understanding the small yet significant differences for future situations when you encounter more complex source documents and stylesheets. You can rely on the structure of the XML source document using template matching (push), or explicitly select elements, pulling them into the output document.
Other instructions worthy of note at this juncture (there are plenty more for you to explore) are:
<xsl:for-each>, which, as you might guess, performs explicit processing of each of the specified nodes in turn.
<xsl:call-template>, which invokes a specific template by name, rather than relying on pattern matching.
<xsl:apply-templates>, which can also take a mode attribute, allowing you to make multiple passes through the XML data representation.
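As a sketch of these in action, the Library/Book templates of Listing 3 could be rewritten with an explicit xsl:for-each loop, with a second, mode-qualified pass over the same data (the mode name summary is my own invention for illustration):

```xml
<xsl:template match="Library">
  <!-- 'Pull' the Book rows with an explicit loop
       rather than template matching -->
  <xsl:for-each select="Book">
    <tr>
      <td><xsl:value-of select="Title"/></td>
      <td><xsl:value-of select="PublishedDate"/></td>
    </tr>
  </xsl:for-each>
  <!-- A second pass over the same nodes, routed to the
       mode="summary" template below -->
  <xsl:apply-templates select="Book" mode="summary"/>
</xsl:template>

<xsl:template match="Book" mode="summary">
  <p><xsl:value-of select="Title"/></p>
</xsl:template>
```

Only templates carrying the matching mode attribute are considered for the second pass, which is what makes multiple, differently-formatted passes through the same data possible.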
I’ve briefly introduced the basic XSLT concepts and, in particular, the push and pull models. The pull model is characterised by a few large templates and use of the <xsl:value-of> element so that the stylesheet controls the order of items in the output. In comparison the push model tends more towards smaller templates with the output largely following the structure of the XML source document.
I mentioned earlier that XSLT is often thought of as a declarative language. However, it also contains the flow control and looping instructions consistent with a procedural language. Typically, a push model stylesheet emphasizes the declarative aspects of the language, while the pull model emphasizes the procedural aspects.
Note the use of the word ‘typical’ - most stylesheets will contain elements of both push and pull models. However, it is useful to keep the two models in mind as it can make your stylesheet development simpler.
There you have it – we’ve scratched the surface of the XSLT and XPath languages and I’ll leave you to explore further. Both Wrox and O’Reilly have several books on the subject that have been well reviewed … take your pick if you want to delve deeper. Let me know if you’d like me to write another article on XSLT, building on what I’ve introduced here.
Time to see what .NET has to offer.
XSLT in .NET
First point of note: you can perform XSLT processing on the server or client (assuming your client browser has an XSLT processor). The usual client vs. server arguments pervade here: chiefly you’d like to utilise the processing power of the client machine rather than tying up server resources but can you be sure the client browser population is fit for purpose? If the answer to the latter is yes – the main requirement being that the XSLT you’ve written doesn’t generate errors in the client browser processor – then you can simply reference the XSLT stylesheet from the XML file and the specified transformation will be undertaken.
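Referencing the stylesheet from the XML file is done with an xml-stylesheet processing instruction at the top of the document. For example, to have the client browser apply the stylesheet of Listing 2 to books.xml:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="example1.xslt"?>
<Library>
  <!-- remainder of the document as in Listing 1 -->
</Library>
```

Open the XML file directly in an XSLT-capable browser and the transformed HTML is what you see.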
Returning to the server-side processing options: you won’t be surprised to learn that it is the System.Xml namespace where the classes and other namespaces relating to XSLT are found. The main ones are:
1. XPathDocument (System.Xml.XPath)
This provides the faster option for XSLT transformation as it gives read-only, cursor-style access to the XML data. It has no public properties or methods to remember but does have several constructors, accepting an XmlReader, a TextReader, a Stream or a string path to an XML document.
2. XslTransform (System.Xml.Xsl)
This is the XSLT processor and hence the key class of interest to us. There are three main steps: instantiate the transform object, load the XSLT document into it, and then transform the required XML document (accessed via the XPathDocument object created for the purpose).
3. XsltArgumentList (System.Xml.Xsl)
Allows provision of parameters to XslTransform. XSLT defines an xsl:param element that can be used to hold information passed into the stylesheet from the XSLT processor; XsltArgumentList is the mechanism by which this is achieved.
Also of direct relevance are: XmlDocument and XmlDataDocument but I won’t be considering them further here … I’ll leave this to your own investigation.
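A minimal sketch of passing a parameter with XsltArgumentList (the parameter name siteTitle and its value are invented for illustration; doc is an XPathDocument built as in Listing 4 below):

```vb
'In the stylesheet, declare the parameter at the top level:
'  <xsl:param name="siteTitle"/>
'and reference it where needed, e.g. <xsl:value-of select="$siteTitle"/>

Dim args as XsltArgumentList = new XsltArgumentList()
'Arguments: parameter name, namespace URI (empty here), value
args.AddParam("siteTitle", "", "My Library")

Dim xslDoc as XslTransform = new XslTransform()
xslDoc.Load(Server.MapPath("example2.xslt"))
'The argument list replaces the 'nothing' second parameter of Listing 4
xslDoc.Transform(doc, args, Response.Output)
```

Any stylesheet parameter without a corresponding entry in the argument list simply takes its default value, so existing stylesheets continue to work unchanged.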
Let’s go straight to a simple example showing 1 and 2 above in action:
Listing 4: .NET example: Transform.aspx
<%@ Page language="vb" trace="false" debug="false"%>
<%@ Import Namespace="System.Xml" %>
<%@ Import Namespace="System.Xml.Xsl" %>
<%@ Import Namespace="System.Xml.XPath" %>
<%@ Import Namespace="System.IO" %>
<%@ Import Namespace="System.Text" %>
<script language="VB" runat="server">
public sub Page_Load(sender as Object, e as EventArgs)
Dim xmlPath as string = Server.MapPath("books.xml")
Dim xslPath as string = Server.MapPath("example2.xslt")
Dim fs as FileStream = new FileStream(xmlPath,FileMode.Open, FileAccess.Read)
Dim reader as StreamReader = new StreamReader(fs,Encoding.UTF8)
Dim xmlReader as XmlTextReader = new XmlTextReader(reader)
'Instantiate the XPathDocument Class
Dim doc as XPathDocument = new XPathDocument(xmlReader)
'Instantiate the XslTransform Class
Dim xslDoc as XslTransform = new XslTransform()
xslDoc.Load(xslPath)
xslDoc.Transform(doc,nothing,Response.Output)
'Close Readers
reader.Close()
xmlReader.Close()
end sub
</script>
As you can see, this example uses the stylesheet example2.xslt as introduced earlier. Describing the code briefly: on page load, strings are defined holding the paths to the input files in the local directory. A FileStream object is instantiated and the XML document loaded into it. From this a StreamReader object is instantiated, and in turn an XmlTextReader from that (note the Import of System.Text, needed for Encoding.UTF8). The DOM can then be constructed within the XPathDocument object from the XML source via the objects so far defined. We then instantiate the XslTransform object, load the stylesheet as defined by the string xslPath, and call the Transform method. The parameters are the XPathDocument object complete with the tree constructed from the XML document, any parameters passed to the stylesheet – none in this case, and the output destination of the result tree.
ASP.NET also comes complete with the asp:Xml web control, making it easy to perform simple XSLT transformations in your ASP.NET pages. Use it as you would any other web control: you simply supply the two input properties (DocumentSource and TransformSource), either declaratively or programmatically. Here’s an example that does both, for demonstration and clarification purposes:
Listing 5: ASP:xml web control: Transform2.aspx
<%@ Page language="vb" trace="true" debug="true"%>
<script language="vb" runat="server">
sub page_load()
xslTrans.DocumentSource="books.xml"
xslTrans.TransformSource="example2.xslt"
end sub
</script>
<html>
<body>
<asp:xml id="xslTrans" runat="server"
DocumentSource="books.xml" TransformSource="example2.xslt" />
</body>
</html>
Lastly, just to leave you with the thought that the place of XML/XSLT technology in the ASP.NET model is not clear-cut, as the server controls generate their own HTML. Does this leave XSLT redundant? Well, no … but we may need to be a little more creative in our thinking. For example, the flexibility of XML/XSLT can be combined with the power of ASP.NET server controls by using XSLT to generate the server controls dynamically, thus leveraging the best of both worlds. Perhaps I’ll leave this for another article. Let me know if you are interested.
References:
ASP.NET: Tips, Tutorials and Code (Sams)
XSLT Programmer’s Reference, 2nd Edition (Wrox)
Professional XSL (Wrox)
Various online sources
You may run Transform.aspx by clicking Here.
You may run Transform2.aspx by clicking Here.
You may download the code by clicking Here.