Multitier Data Management Architecture

I recently came across a document that I wrote a couple of years ago, detailing what I believed to be – at the time – the milestone that all tech companies seem to be driving towards, whether they have realized it or not. Some companies (and even individuals) don't aim at an ultimate goal; they just work incrementally ("going with the flow") and decide from there how to build their systems, while others prefer to structure their thoughts around an end-goal and then detail the steps to get there.

Anyway, if you observe the steps that companies such as Apple, with their mobile operating system iOS, and Google, with the Chromium project, have taken, as well as Microsoft with their upcoming release of Windows 8, you may begin to notice a trend. Funnily enough, I came to realize that they are all heading in the same direction as the data management architecture I had detailed in that document. The catch is that each company has focused far more on just a couple of the layers presented here than on the whole architecture, but I still believe they are all moving more or less in this same direction.

The architecture consists of a layered structure (analogous to the 7-layer OSI model universally used in present-day computer networks): you start at the very bottom, the physical layer, and data works its way up the layers until it is presented to, and manipulated by, the end-user at the presentation layer. Read through the following sections for a description of what each layer is responsible for and how the layers present data to one another.

I know this may seem counter-intuitive, but I believe it makes more sense to read it from the bottom up, i.e. start off with the physical layer and work your way up.
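
To make that bottom-up ordering concrete, here is a rough sketch (in Python, purely illustrative) of the stack as an ordered list of the layer names I describe below:

    # A minimal sketch of the proposed stack, listed bottom-up.
    # The names simply mirror the layer headings that follow.
    DATA_MANAGEMENT_STACK = [
        "Physical",              # storage media (HDD, SSD, optical, tape)
        "Media Access Control",  # interconnects and transport protocols
        "Filesystem",            # how data is mapped onto the storage pool
        "Access Control",        # user/group access privileges
        "File-Access",           # generic file/object abstraction
        "Metadata",              # attributes attached to each file
        "Content Management",    # organization of data by its content
        "Presentation",          # the frontend seen by the user
    ]

    # Data enters at the physical layer and is handed up one layer at a time.
    for lower, upper in zip(DATA_MANAGEMENT_STACK, DATA_MANAGEMENT_STACK[1:]):
        print(f"{lower} layer -> {upper} layer")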

Multitier Data Management Architecture

Presentation Layer

This is the full frontend through which the data is presented to and manipulated by the user; it entails graphics, tables, and lists of objects (i.e. files with their metadata).
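
As a purely illustrative sketch, assuming a hypothetical list of objects handed up from the layers below, the presentation layer might render them as a simple table:

    # Hypothetical sketch: render a list of objects (files plus their
    # metadata) the way a presentation layer might display them.
    objects = [
        {"name": "report.pdf", "type": "document", "size_kb": 240},
        {"name": "photo.jpg",  "type": "image",    "size_kb": 1850},
        {"name": "notes.txt",  "type": "document", "size_kb": 4},
    ]

    print(f"{'Name':<12} {'Type':<10} {'Size (KB)':>9}")
    for obj in objects:
        print(f"{obj['name']:<12} {obj['type']:<10} {obj['size_kb']:>9}")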

Content Management Layer

Here we work on the organization of the data, based on metadata reference tags and properties. This can be compared to the filesystem layer below, except that it organizes the data based on the nature of the data itself. This counters the philosophy of the "File-Access Layer" below, where all files are treated as generic. The logic is that from the "File-Access Layer" down, data is considered generic, just bits, 1s and 0s; however, as we move up, getting closer to the user at the presentation layer, the actual content of each file becomes the important factor. The whole goal of this layer is to organize all files in the system so that when the user or an application requests a certain piece of information, this layer can direct the request to the correct file.
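
A minimal sketch of that idea, assuming a hypothetical in-memory index that maps metadata tags to file identifiers and answers a query from the user or an application:

    from collections import defaultdict

    # Hypothetical content index: maps a metadata tag to the set of file IDs
    # carrying that tag, so a request can be directed to the right files.
    class ContentIndex:
        def __init__(self):
            self._by_tag = defaultdict(set)

        def register(self, file_id, tags):
            for tag in tags:
                self._by_tag[tag].add(file_id)

        def lookup(self, *tags):
            # Return the files that carry every requested tag.
            sets = [self._by_tag[t] for t in tags]
            return set.intersection(*sets) if sets else set()

    index = ContentIndex()
    index.register("file-001", ["photo", "vacation", "2011"])
    index.register("file-002", ["photo", "family"])
    index.register("file-003", ["document", "invoice", "2011"])

    print(index.lookup("photo", "2011"))   # -> {'file-001'}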

Metadata Layer

Every file carries attributes assigned to it as metadata, to be used for referencing (indexing, searching, sorting, categorizing, etc.) in higher layers, while the lower layers maintain optimal access and storage methods. At this layer we move past the generic-file philosophy used today: we actually start to pay attention to the nature of the content of each file and treat it as such.
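
As an illustrative sketch only, the attributes could be carried as a small per-file record (the field names here are assumptions, not a fixed schema), which higher layers then use for indexing, searching and sorting:

    from dataclasses import dataclass, field

    # Hypothetical metadata record attached to every file at this layer.
    @dataclass
    class FileMetadata:
        file_id: str
        name: str
        size_bytes: int
        content_type: str                 # e.g. "image/jpeg", "text/plain"
        tags: set = field(default_factory=set)

    records = [
        FileMetadata("file-001", "beach.jpg", 1_850_000, "image/jpeg",
                     {"photo", "vacation"}),
        FileMetadata("file-002", "invoice.txt", 4_096, "text/plain",
                     {"document", "invoice"}),
    ]

    # Higher layers can sort, search or categorize purely on these attributes.
    for record in sorted(records, key=lambda r: r.size_bytes, reverse=True):
        print(record.name, record.content_type, record.size_bytes)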

File-Access Layer

The highlight of this layer is the need for "files" on the system, with each file mapped to a set of attributes that define it, such as size and type (handled in the metadata layer above). This layer enforces the concept of the generic file: as far as the lower layers are concerned, all data is treated identically, with no special treatment; once you move up the stack, each file is given a purpose based on its type and intended use. This philosophy amounts to a method of "objectifying" files, whereby one no longer works with files but with generic objects.
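
A minimal sketch of the generic-object idea, assuming a hypothetical store that only ever deals in opaque byte strings; nothing at or below this layer knows or cares what the bytes mean:

    # Hypothetical file-access layer: every file is just an opaque object of
    # bytes, with no special treatment based on what the bytes represent.
    class ObjectStore:
        def __init__(self):
            self._objects = {}

        def put(self, object_id: str, data: bytes) -> None:
            self._objects[object_id] = data

        def get(self, object_id: str) -> bytes:
            return self._objects[object_id]

    store = ObjectStore()
    store.put("file-001", b"\x89PNG...")      # an image
    store.put("file-002", b"Dear customer,")  # a text document

    # From this layer's point of view both are identical: just bytes.
    print(len(store.get("file-001")), len(store.get("file-002")))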

Access Control Layer

The security layer governing the access privileges of every user and group of users. Access control should not be handled as a filesystem-specific entity; it should be a universal method of authentication and authorization applicable to all platforms, which would help computers and electronic devices play a lot better together. Following this logic, security specialists can focus their efforts on this layer without having to worry about the inner workings of a particular filesystem.
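
A minimal sketch of a filesystem-agnostic check, with hypothetical user/group and permission tables that apply to abstract resource IDs rather than to paths on any particular filesystem:

    # Hypothetical access control: permissions are granted to users or groups
    # on abstract resource IDs, not on filesystem-specific paths.
    GROUPS = {"engineering": {"alice", "bob"}}

    # (resource_id, action) -> set of principals (users or groups) allowed
    PERMISSIONS = {
        ("file-001", "read"):  {"engineering"},
        ("file-001", "write"): {"alice"},
    }

    def is_allowed(user: str, resource_id: str, action: str) -> bool:
        principals = PERMISSIONS.get((resource_id, action), set())
        if user in principals:
            return True
        return any(user in GROUPS.get(g, set()) for g in principals)

    print(is_allowed("bob", "file-001", "read"))   # True, via the group
    print(is_allowed("bob", "file-001", "write"))  # False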

Filesystem Layer

The way the data is mapped out on the storage media/pool. The most important detail at this layer is that the filesystem should be independent of the underlying layers. An example of this would be Sun Microsystems' 128-bit ZFS filesystem, which can create storage pools across an array of various drive types, as long as they are writable media. In some circles this is referred to as storage virtualization.

e.g. NTFS, FAT, ZFS, HFS, EXT3, GFS, CFS, etc.
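
A minimal sketch of that independence, assuming a hypothetical pool interface that any writable medium can back; the filesystem only ever talks to the pool, never to a specific drive type:

    # Hypothetical sketch: the filesystem writes to an abstract storage pool,
    # and the pool may be backed by any mix of writable media underneath.
    class StoragePool:
        def __init__(self, devices):
            self.devices = devices   # e.g. ["sata-disk-0", "ssd-1", "usb-2"]

        def write_block(self, block_no: int, data: bytes) -> None:
            # A real pool would stripe or mirror across devices; here we just
            # report where the block would go.
            print(f"block {block_no} ({len(data)} bytes) -> {self.devices}")

    class Filesystem:
        def __init__(self, pool: StoragePool):
            self.pool = pool

        def write_file(self, data: bytes, block_size: int = 4096) -> None:
            for i in range(0, len(data), block_size):
                self.pool.write_block(i // block_size, data[i:i + block_size])

    fs = Filesystem(StoragePool(["sata-disk-0", "ssd-1"]))
    fs.write_file(b"x" * 10_000)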

Media Access Control Layer

The architecture for interconnecting the media used for storage, and the respective communication protocol(s) used to transport data to and from them.

e.g. iSCSI, AoE (ATA over Ethernet), SATA, SATA-II, IDE, USB, FireWire, etc.
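
One way to picture this layer, as a purely illustrative sketch with hypothetical classes, is a common block-transport interface that each interconnect (local bus or network protocol) implements in its own way:

    # Hypothetical sketch: every interconnect exposes the same transport
    # interface, whether the medium sits on a local bus or across a network.
    class BlockTransport:
        def read_block(self, lba: int) -> bytes:
            raise NotImplementedError

    class SataTransport(BlockTransport):
        def read_block(self, lba: int) -> bytes:
            return f"<512 bytes from local SATA disk, LBA {lba}>".encode()

    class IscsiTransport(BlockTransport):
        def __init__(self, target: str):
            self.target = target

        def read_block(self, lba: int) -> bytes:
            return f"<512 bytes from iSCSI target {self.target}, LBA {lba}>".encode()

    # The layers above neither know nor care which transport is in use.
    for transport in (SataTransport(), IscsiTransport("192.168.1.50:3260")):
        print(transport.read_block(42))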

Physical Layer

The media being used to store your data; here we separate the physical medium from the logic of the OS. A set of predefined metrics can be used to categorize the different storage media regardless of their actual physical nature. Using this logic, there may be overlap between devices such as low-end SSDs and high-performing flash drives, but this should be irrelevant to the user/OS, since the device (irrespective of its nature) behaves in a certain, measurable way.

e.g. HDDs, SSDs, CD, DVD, Blu-ray, Tape, etc.
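
A minimal sketch of that categorization, using a couple of assumed metrics (throughput and access latency) to place a device into a performance class irrespective of what it physically is:

    # Hypothetical sketch: classify storage media by measured behaviour
    # (throughput, latency), not by what the device physically is.
    def classify(device_name: str, mb_per_s: float, latency_ms: float) -> str:
        if latency_ms < 1 and mb_per_s > 200:
            tier = "fast"
        elif latency_ms < 20:
            tier = "standard"
        else:
            tier = "archival"
        return f"{device_name}: {tier}"

    print(classify("consumer SSD",     280, 0.2))    # fast
    print(classify("high-end flash",   250, 0.3))    # fast (overlaps with the SSD)
    print(classify("7200 rpm HDD",     120, 9.0))    # standard
    print(classify("LTO tape library", 140, 60000))  # archival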

If you have actually taken the time to read this, you are probably thinking that a lot of it already exists, and I agree that it does to some extent or another; however, it does not exist in its entirety, nor with this degree of modularity and flexibility.

Essentially, by following this model, manufacturers, developers and engineers in the industry would be able to separate the layers as they have done in the networking world, and in turn tackle each layer on its own to increase performance and apply elements such as multi-layer security. Furthermore, now that we are seeing the PC and the cloud mesh together, networked systems are becoming more and more abundant, and by adopting such an architecture the move can be made seamless to the end-user: they wouldn't have to consider what is happening under the hood, because it's getting done, and it's getting done well.

All comments and discussions are welcome.

Creative Commons License
Multitier Data Management Architecture by John Laham is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Permissions beyond the scope of this license may be available at this link.
