Website Design Considerations

Future Scalability

For a website which will only ever have a few pages, and which will never grow, there is little need to worry about any of the considerations discussed so far. The effort of setting up SSIs, external JavaScript and CSS files, and linking them all into every page isn't really justified for just a few pages.

But as the site gets larger this rapidly changes: a break-even point is soon reached at which the shared-file approach becomes the easier option.

Very soon after that a further point is reached at which the site becomes unmaintainable without this method; that can happen at as few as a dozen pages.

If you are looking at an existing website with a view to enlarging it past this threshold, then you should look carefully at its structure. If it has not been designed with these principles in mind, then in the long term you may be better off cutting the existing files down so that they use these methods; implementing the new pages will then be a simple matter of copying templates and pasting in the specific content.

For very large websites even this approach can become unwieldy. In particular, for sites which display archive information, records and the like, and which have many hundreds of self-similar pages, a more technical solution may be in order.

The page files themselves will always contain the same framework; only the actual content will differ. Even though the files are stored as efficiently as possible, using SSIs and included JS and CSS scripts to reduce the amount of repeated code, this volume will still start to add up. Instead of storing entire pages within the files, consider storing just the content blocks specific to each page, without any of the rest of the page HTML. Create an additional data file containing the required information about these page fragments, usually some sort of reference code or id, and write a CGI program which takes a reference to a specific item as an argument and outputs the full HTML page including the desired content.
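
As a concrete illustration, the sketch below shows what such a CGI program might look like in Perl. The data file name (fragments.dat), its pipe-delimited id|path format, and the header.html/footer.html template files are all assumptions made for this example, not a prescription.

    #!/usr/bin/perl
    # page.cgi -- sketch only: look up a content fragment by id and wrap it
    # in the standard page template. File names and data format are
    # assumptions made for this example.
    use strict;
    use warnings;
    use CGI qw(param);

    my $id = param('id') || '';
    $id =~ s/[^\w-]//g;                # keep only safe characters in the id

    # Scan the data file for the requested fragment: one "id|path" pair per line.
    my $fragment_file;
    open my $data, '<', 'fragments.dat' or die "Cannot open data file: $!";
    while (my $line = <$data>) {
        chomp $line;
        my ($key, $path) = split /\|/, $line, 2;
        next unless defined $path;
        if ($key eq $id) {
            $fragment_file = $path;
            last;
        }
    }
    close $data;

    # Output the shared header, the page-specific content, then the shared footer.
    print "Content-type: text/html\n\n";
    print_file('header.html');
    if ($fragment_file && -r $fragment_file) {
        print_file($fragment_file);
    } else {
        print "<p>Page not found.</p>\n";
    }
    print_file('footer.html');

    sub print_file {
        my ($name) = @_;
        open my $fh, '<', $name or return;
        print while <$fh>;
        close $fh;
    }

Every page on the site would then be served by this one program, requested as, for example, page.cgi?id=about.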

This is the beginning of a content management system, albeit a simple one. There is no limit to how much can be stored in this way, or to the types of data structure that can be built. Perl is extremely fast and can parse even quite large text files into memory very quickly.
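
For example, under the same assumed pipe-delimited format as the sketch above, the whole data file can be read into a hash in a handful of lines, after which every lookup is effectively instant:

    # Sketch: load the whole data file into a hash keyed on the fragment id.
    # The file name and the id|path|title layout are assumptions for the example.
    my %fragments;
    open my $data, '<', 'fragments.dat' or die "Cannot open data file: $!";
    while (my $line = <$data>) {
        chomp $line;
        next if $line =~ /^\s*(#|$)/;          # skip comments and blank lines
        my ($key, $path, $title) = split /\|/, $line;
        $fragments{$key} = { path => $path, title => $title };
    }
    close $data;

    # $fragments{'about'}{path} now gives the content file for the 'about' page.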

Even this method can get unwieldy once the data file itself becomes too large. Every time the program runs it has to read through all of the data to find whatever it is seeking, and as the file grows this inefficiency becomes more significant. At that point the solution is to do away with the text data file altogether, store the information in a database, and query that database directly from the CGI programs.

Using the database, the CGI program can retrieve exactly the values it needs. The combination of MySQL and Perl is very well suited to this, and for large data sets of many thousands of items it is significantly faster than reading the data from a flat text file.
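
A sketch of the database-backed version, using the standard DBI module, might look like the following. The database name, table layout, column names and credentials are all placeholders invented for this example:

    #!/usr/bin/perl
    # Sketch only: serve a page from a MySQL table instead of a flat data file.
    use strict;
    use warnings;
    use CGI qw(param);
    use DBI;

    my $id = param('id') || '';

    # Database name, user and password are placeholders for this example.
    my $dbh = DBI->connect('DBI:mysql:database=website;host=localhost',
                           'webuser', 'secret', { RaiseError => 1 });

    # Fetch only the single row required rather than reading the whole data set.
    my $row = $dbh->selectrow_hashref(
        'SELECT title, body FROM pages WHERE id = ?', undef, $id);

    print "Content-type: text/html\n\n";
    if ($row) {
        print "<html><head><title>$row->{title}</title></head><body>\n";
        print $row->{body}, "\n";
        print "</body></html>\n";
    } else {
        print "<html><body><p>Page not found.</p></body></html>\n";
    }

    $dbh->disconnect;

Assuming the id column is indexed, the lookup is a single indexed query rather than a linear scan of a text file, so the time taken barely grows as the number of pages increases.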
