Your Choice

Tuesday 12 June 2012

Petrol information


Petrol 

General information 

Key Points 

Fire 

• Highly flammable 

• Mixtures of petrol vapour and air may explode  

• In the event of a fire involving petrol, use normal foam and normal fire kit with breathing apparatus

Health 

• Serious lung injury may occur if droplets of petrol are inhaled (e.g. if vomiting occurs after ingestion)

• Harmful 

• Inhalation may cause headache, dizziness and drowsiness.  

• Often no symptoms occur following ingestion. In some cases, sickness and diarrhoea may occur

• Petrol vapour may be irritating to the eyes and lungs 

• Prolonged skin exposure to petrol may cause a variety of skin conditions  

• Long-term exposure to high levels of petrol is associated with a range of disorders affecting the nervous system

• Petrol does not affect human reproduction or development 

• There is currently no evidence that petrol causes cancer in humans  

Environment 

• Avoid release into the environment 

• Inform the Environment Agency of substantial releases 

Thursday 1st March

I have just completed delivering a presentation on Data Governance at the Ovum London conference (http://governance-planning.ovumevents.com/); I'll add a link to the slides here shortly.

Data Governance: a vital component of IT Governance


I'm sure we all know that data is growing at a vast rate; however, I recently read about an even bigger problem concerning uncontrolled growth ...
 ... 12 grey rabbits were brought to Australia in the 19th century for sport. After 2 years, in excess of 2 million per year were being shot & still the population wasn't dented. A few years later the population was over 400 million. So even the data explosion highlighted by the 2011 IDC Digital Universe study hasn't yet reached those proportions.

IT Governance:
From several of the well-established frameworks (e.g. ITIL), the common key components of an IT Governance framework seem to be:

1) Strategic Alignment: 
Alignment of the business and IT strategy, covering both the definition and the ongoing review and improvement of IT’s contribution to value.
2) Value Delivery:
Within their service cycle, IT services in their entirety bring a benefit in respect of the corporate strategy and generate added value for the enterprise.
3) Resource Management:
Efficient management of resources such as applications, information, infrastructure and people, as well as optimization of the investment.
4) Risk Management:
Identification and analysis of the risks in order to avoid unpleasant surprises and to gain a clear understanding of the company’s risk preference.
5) Performance Management:
Monitoring and control of all performance in terms of its orientation towards the corporate strategy.

Looking now at Data Governance, some of the key areas that need to be considered, particularly for folks more used to IT Governance, are:

1) There are usually 3 main drivers for Data Governance:
Pre-emptive: Where organisations are facing major change or threats; designed to ward off significant issues that could affect the success of the company.
Reactive: Where efforts are designed to respond to current pains.
Pro-active: Where governance efforts are designed to improve capabilities to resolve risk and data issues. This builds on reactive governance to create an ever-increasing body of validated rules, standards, and tested processes.

2) Data Governance can be implemented in 3 ways (Tactical, Operational, Strategic), and these often overlap.

3) There is certainly no "one size fits all" approach to Data Governance. An organisation needs a flexible approach that delivers maximum business value from its data asset.
Data Governance can drive massive benefit; however, to accomplish this there needs at least to be reuse of data, common models, consistent understanding, data quality, and shared master and reference data.
Organisationally, different parts of the business have different needs, and different criteria for their data. A matrix approach is needed, as these different parts of the organisation and data types are driven from different directions.
However, no matter how federated the organisation may be, some degree of central organisation will be required to drive Data Governance adoption, implement corporate repositories and establish corporate standards.
The IPL Business Consulting practice has a flexible DG framework that can be tailored to help.

4) Communication & stakeholder engagement are key. No matter how brilliant the framework is, or how great your policies or DG council are, if you don't adequately engage and communicate with the stakeholders, the DG initiative will go nowhere.

5) Finally, all of this is only important if information REALLY is a key corporate asset for your organisation ... so ask yourself: is it?

So IT Governance vs. Data Governance?
In summary, Data Governance is a vital, frequently overlooked component of an overall IT Governance approach. Remember the 5 common components of an IT Governance approach? Well, let's apply these in a Data Governance context and we see ...

1) Strategic Alignment:
Alignment of the business information needs and the IT methods and processes for delivering information that is fit for purpose.
2) Value Delivery:
Delivering information to the requisite quality, time, completeness and accuracy levels, and optionally monetising the value of information.
3) Resource Management:
Ensuring people and technology resources are optimised so that the definition, ownership, and delivery of information resources meet business needs.
4) Risk Management:
Information security, backup & retention and delivery are balanced against regulatory and accessibility needs as befits the company’s risk preference.
5) Performance Management:
Monitoring and control of Data Governance roles, responsibilities and workflows such that they meet the demands of the corporate strategy.

Data Virtualisation As An Approach To Data Integration

Many different approaches are now available for Data Integration, yet far and away the most popular approach remains Extract, Transform and Load (ETL).
However, the pace of business change and the requirement for agility demand that organisations support multiple styles of data integration.

Three leading options present themselves; let's describe the differences among these three major styles of integration.

1. Physical Movement and Consolidation

Probably the most commonly used approach is physical data movement. This is used when you need to replicate data from one database to another. There are two major genres of physical data movement: Extract, Transform & Load (ETL) and Change Data Capture (CDC). 
ETL is typically run according to a schedule and is used for bulk data movement, usually in batch. CDC is event-driven and delivers real-time incremental replication. Example products in these areas are Informatica (ETL) and GoldenGate (CDC).
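
As a rough illustration of the ETL style (not of any particular product), here is a minimal batch job in Python; the database files, table names and currency rates are all invented for the example:

```python
import sqlite3

# Extract: pull rows from the source database ("source.db" and its
# orders table are invented for this example)
source = sqlite3.connect("source.db")
rows = source.execute("SELECT id, amount, currency FROM orders").fetchall()

# Transform: normalise every order amount to GBP (illustrative rates only)
RATES = {"GBP": 1.0, "USD": 0.79, "EUR": 0.85}
transformed = [(order_id, amount * RATES[currency])
               for order_id, amount, currency in rows]

# Load: bulk-insert the transformed rows into the warehouse copy
warehouse = sqlite3.connect("warehouse.db")
warehouse.execute("CREATE TABLE IF NOT EXISTS orders_gbp (id INTEGER, amount_gbp REAL)")
warehouse.executemany("INSERT INTO orders_gbp VALUES (?, ?)", transformed)
warehouse.commit()
```

The defining feature is that the data is physically copied: after the job runs, the same orders exist in both databases, which is exactly the replication trade-off discussed later.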


2. Message-based synchronisation & propagation

Whilst ETL and CDC are database-to-database integration approaches, the next approach, message-based synchronisation and data propagation, is used for application-to-application integration. Once again there are two main genres, Enterprise Application Integration (EAI) and Enterprise Service Bus (ESB) approaches, but both of these are used primarily for event-driven business process automation. A leading product example in this area is the ESB from Tibco.
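
To illustrate the event-driven style rather than any particular EAI/ESB product, here is a minimal in-process publish/subscribe sketch in Python; a real ESB adds networking, durability and routing on top of this idea:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy in-process pub/sub bus; real ESBs add queues, durability, routing."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Every subscribing application reacts to the same event independently
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
# Two downstream applications stay synchronised by consuming the same event
bus.subscribe("order.created", lambda e: print("Billing saw order", e["id"]))
bus.subscribe("order.created", lambda e: print("Shipping saw order", e["id"]))
bus.publish("order.created", {"id": 42})
```

The point of the sketch is the decoupling: the publisher knows nothing about the consuming applications, which is what makes this style suit business process automation.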

3. Abstraction / Virtual Consolidation (aka Federation)

Thirdly you have Data Virtualisation (DV). The key here is that the data source (usually a database) and the target or consuming application (usually a business application) are isolated from each other. The information is delivered on demand to the business application when the user needs it. The consuming application can consume the data as though it were a database table, a star schema, an XML message or in many other forms. The key point with a DV approach is that the form of the underlying source data is isolated from the consuming application. The rationale for Data Virtualisation within an overall Data Integration strategy is to overcome complexity, increase agility and reduce cost. A leading product example in this area is Composite Software.
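
As a sketch of the idea (not of any vendor's product), the "virtual view" below joins two live sources on demand and never materialises a copy; the file names and schemas are invented for the example:

```python
import sqlite3

def customer_orders_view(customer_id: int) -> list[dict]:
    """Virtual view: assembled on demand from two sources; nothing is staged."""
    # Source 1: a CRM database ("crm.db" and its schema are invented here)
    crm = sqlite3.connect("crm.db")
    (name,) = crm.execute(
        "SELECT name FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()

    # Source 2: an order system ("orders.db" likewise invented)
    orders = sqlite3.connect("orders.db").execute(
        "SELECT id, amount FROM orders WHERE customer_id = ?", (customer_id,)
    ).fetchall()

    # The consumer sees one simple shape, isolated from both source schemas
    return [
        {"customer": name, "order_id": order_id, "amount": amount}
        for order_id, amount in orders
    ]
```

Contrast this with the ETL sketch earlier: here the data stays where it is, and if either source schema changes only the view needs rework, not the consumers.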

ETL or DV?
The suitability of Data Integration approaches needs to be considered case by case. Here are 6 key considerations to ponder; a rough scoring sketch follows the list:

1. Will the data be replicated in both the DW and the Operational System?

      Will data need to be updated in one or both locations?
      If data is physically in two locations, beware of regulatory & compliance issues associated with having additional copies of the data (e.g. SOX, HIPAA, Basel II, FDA etc)

2. Data Governance

      Is the data only to be managed in the originating Operational System?

      What is the certainty that a DW will be a reporting DW only (vs an Operational DW)?

3. Currency of the data, i.e. does it need to be up to the minute?

      How up to date are the data requirements of the DW?
      Is there a need to see the operational data?

4. Time to solution, i.e. how quickly is the solution required?

      Immediate requirement?
      Confirmed users & usage?

5. What is the life expectancy of source system(s)?
      Are any of the source systems likely to be retired?
      Will new systems be commissioned?
      Are new sources of data likely to be required?

6. Need for historical / summary / aggregate data
      How much historical data is required in the DW solution?
      How much aggregated / summary data is required in the DW solution?
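
As a toy way to pull the six questions together (the keys, weighting and the ">= 4" threshold below are entirely illustrative, not a published methodology), one might score the answers like this:

```python
# Toy decision aid for the six considerations above.
def suggest_integration_style(answers: dict[str, bool]) -> str:
    dv_points = sum([
        answers["regulatory_concerns_over_copies"],       # 1: replication & compliance
        answers["data_managed_only_in_source"],           # 2: data governance
        answers["needs_up_to_the_minute_data"],           # 3: currency
        answers["solution_needed_quickly"],               # 4: time to solution
        answers["source_systems_changing"],               # 5: source life expectancy
        not answers["needs_deep_history_or_aggregates"],  # 6: history & aggregates
    ])
    return "lean towards DV" if dv_points >= 4 else "lean towards ETL"

# Example: volatile sources, tight deadline, little need for deep history
print(suggest_integration_style({
    "regulatory_concerns_over_copies": True,
    "data_managed_only_in_source": True,
    "needs_up_to_the_minute_data": True,
    "solution_needed_quickly": True,
    "source_systems_changing": True,
    "needs_deep_history_or_aggregates": False,
}))
```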

Leading analyst firms like Gartner are recommending that data virtualisation be added to your integration tool kit, and that you should use the right style of data integration for the job for optimal results. 
Just like so many things in Information Management, there's more than one way to accomplish Data Integration; ETL is not the only way. Data Virtualisation is well worth considering as a part of your overall strategy.

A recent (June 2011) IDC Digital Universe study found that the world's data is more than doubling every two years, outpacing Moore's Law. It reckoned that 1.8 zettabytes (1.8 trillion gigabytes) would be created and replicated in 2011, and that enterprises will manage 50x more data, and files will grow 75x, in the next decade.
The “big data” phenomenon is driving transformational technological, scientific, and economic changes, and "information taming" technologies are driving down the cost of creating, capturing, managing and storing information.

We’ve all seen how organisations have an insatiable desire for more data as they believe that this information will radically change their businesses.

They are right – but it’s only the effective exploitation of that data, turning it into really useful information and then into knowledge & applied decision making that will realise the true potential of this vast mountain of data.

Incidentally, do you have any idea how much data 1.8 zettabytes really is? It's roughly the amount that would be generated if every person in the world sent twenty tweets an hour for the next 1,200 years!

Data by itself is useless; it has to be turned into useful information and then have effective business intelligence applied to realise its true potential.

The problem is that big data analytics push the limits of traditional data management. Allied to this, the most complex big data problems start with huge volumes of data in disparate stores with high volatility. Big data problems aren't just about volume, though; there's also the volatility of the data sources & rate of change, the variety of the data formats, and the complexity of the individual data types themselves. So is it always the most appropriate route to pull all this data into yet another location for its analysis? 

Unfortunately, though, many organisations are constrained by traditional data integration approaches that can slow the adoption of big data analytics.

Approaches which can provide high performance data integration to overcome data complexity & data silos will be those which win through.  These need to integrate the major types of “big data” into the enterprise.  The typical “big data” sources include:
  • Key/value Data Stores such as Cassandra,
  • Columnar/tabular NoSQL Data Stores such as HBase (on Hadoop) & Hypertable,
  • Massively Parallel Processing Appliances such as Greenplum & Netezza, and
  • Document/XML Data Stores such as CouchDB & MarkLogic.
Fortunately, approaches such as Data Federation / Data Virtualisation are stepping up to meet this challenge.
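
As a hedged sketch of the adapter idea behind such federation (no real driver APIs are used; the two in-memory "stores" below merely stand in for, say, a key/value store and an MPP appliance):

```python
from typing import Iterator, Protocol

class Source(Protocol):
    def fetch(self, key: str) -> Iterator[dict]: ...

class KeyValueSource:
    """Stands in for a key/value store such as Cassandra (no real driver used)."""
    data = {"42": [{"visits": 17}]}

    def fetch(self, key: str) -> Iterator[dict]:
        return iter(self.data.get(key, []))

class RelationalSource:
    """Stands in for a relational/MPP source such as Greenplum (no real driver used)."""
    rows = {"42": [{"name": "Acme Ltd"}]}

    def fetch(self, key: str) -> Iterator[dict]:
        return iter(self.rows.get(key, []))

def federated_fetch(key: str, sources: list[Source]) -> dict:
    # Merge attributes from every store on demand, without copying data anywhere
    merged: dict = {}
    for source in sources:
        for record in source.fetch(key):
            merged.update(record)
    return merged

print(federated_fetch("42", [KeyValueSource(), RelationalSource()]))
# -> {'visits': 17, 'name': 'Acme Ltd'}
```

Each heterogeneous store sits behind a common interface, so the consumer never needs to know which "big data" technology answered the query.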

Finally, and of utmost importance, is managing the quality of the data. What's the use of this vast resource if its quality and trustworthiness are questionable? Thus, driving your data quality capability up the maturity levels is key.

Data Quality Maturity – 5 levels of maturity

Level 1 - Initial: Limited awareness within the enterprise of the importance of information quality. Very few, if any, processes in place to measure the quality of information. Data is often not trusted by business users.

Level 2 - Repeatable: The quality of a few data sources is measured in an ad hoc manner, using a number of different tools. The activity is driven by projects or departments. Limited understanding of good versus bad quality; identified issues are not consistently managed.

Level 3 - Defined: Quality measures have been defined for some key data sources. Specific tools are adopted to measure quality, with some standards in place. The processes for measuring quality are applied at consistent intervals. Data issues are addressed where critical.

Level 4 - Managed: Data quality is measured for all key data sources on a regular basis. Quality metrics are published via dashboards etc. Active management of data issues through the data ownership model ensures issues are often resolved. Quality considerations are baked into the SDLC.

Level 5 - Optimised: The measurement of data quality is embedded in many business processes across the enterprise. Data quality issues are addressed through the data ownership model and fed back to be fixed at source.
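
To make "measuring quality" concrete, here is a minimal completeness and validity check in Python; the record shape, field names and the 0-120 age rule are all invented for the example:

```python
# Minimal data quality measurement: completeness and validity metrics.
records = [
    {"id": 1, "email": "a@example.com", "age": 34},
    {"id": 2, "email": None,            "age": 29},
    {"id": 3, "email": "c@example.com", "age": -5},
]

total = len(records)
email_complete = sum(1 for r in records if r["email"] is not None)
age_valid = sum(1 for r in records if r["age"] is not None and 0 <= r["age"] <= 120)

# Publishing such metrics on a dashboard at regular intervals is the kind of
# repeatable measurement the higher maturity levels describe.
print(f"email completeness: {email_complete / total:.0%}")
print(f"age validity:       {age_valid / total:.0%}")
```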
