Wednesday, 22 December 2010

Salesforce Security

Salesforce security? Cloud security? Multi-tenancy? I can hardly believe I have set off writing a blog entry on perhaps the hottest discussion topic in the industry nowadays. No, no, not at all, not yet, not yet.

I will rather try to quickly outline, for the benefit of the interested reader, how security is applied within a Force.com Org. That is, security which enables or prevents certain users in the Org from seeing other users' records/data, and how users are prevented from seeing certain fields and objects in the Org. So in-house Salesforce 'data' security I should say, not inter-cloud security.

Were you ever interested? We know Oracle and other databases use words like 'permissions' to describe data access. We know applications use words like 'authorisation' or 'authentication' to describe security. Were you ever interested in what security is and how it is accomplished in Salesforce? If you are, keep on reading, else give up now.

Data security in Salesforce is tricky, different and not similar to anything else. There aren't words like 'authorisation' and 'privileges' in this world. Once a Salesforce or Force.com user logs in, 'data access' security kicks in. Somebody can only log in if he/she is a Salesforce user. I use Force.com and Salesforce interchangeably here; it is the same platform at the end of the day. And the security in this platform, described in descending order from big data to small data access, is something like this:

1. Object Level Security: This type of security prevents a user from deleting, updating, seeing or creating any instance of the object (record in the table or, if you like it more old fashioned, tuple in the relation).

2. Field Level Security: In this type of security a user is prevented from seeing, editing and/or deleting the value of a particular field of an object.

I know you already started visualising a database table. Object, what object?

3. Record Level Security: In this type of security we control data with a little bit more 'finesse'. We allow particular users to see an object, but we restrict the individual object records a user can see. A bit like virtual private database (VPD), which in Oracle is achieved with DBMS_RLS. I wonder how this is done in Salesforce? Salesforce is implemented on an Oracle database anyway. This is hot stuff. Everyone is interested in this nowadays; it means you can slice and dice the 'object' (table) horizontally! How is this done in Salesforce? Well, there are four methods.
3.1 Organisational Wide Defaults (OWD): This is the baseline level of access to objects (tables ;-) ) a user has. OWD is defined for each standard or custom (your make) object in Salesforce. OWD in other words is the "baseline level of access that the most restricted user should have". Another name for OWD is "Sharing Model". That is, use OWD to lock-down objects to the most restrictive level.
3.2 Role Hierarchies: The role hierarchy is the first way we can share access to records per se. OWD is a very 'lock-down' and restrictive thing at the object level, whereas roles are the beginning of opening access to certain records via a hierarchy. The role hierarchy ensures that a manager will always have access to the same data as his or her employees. A polite way of saying your boss owns all your data. That is, if you own 10 records in Salesforce, you own them, but your boss owns them as well. Typical hierarchical record-level access control coming down vertically, top to bottom, and the 'Top' owns all! You can even assign access through hierarchies on an object-by-object basis.
3.3 Sharing Rules: This is when you start making exceptions, and confusing people. Not really! Sharing rules let us make automatic exceptions to Organisational Wide Defaults (OWD) for particular groups of users. As you realised in the Role Hierarchies section 3.2 above, role hierarchies enable certain groups of people to own lower-level people's data. Low, down that is. Well, Sharing Rules don't do down, they do left and right. Did you get it? Basically, if there are 2 managers in the same hierarchy at the same level, how would they be able to see each other's data? Enter Sharing Rules. Sharing Rules enable access to records across levels in the hierarchy, by establishing groups which contain these same-level managers, so that they can share their data. Something like Marketing wanting to share its records with the IT department; with the role hierarchy alone this is not possible unless one department is 'under' the other, rather than across at the same level and next to it in the hierarchy.
3.4 Manual Sharing: What manual sharing is, in a few words, is to 'give access to your record to the guy you like'. No, not exactly, but similar. With manual sharing you can grant read or read/write access on records that you own to any other user, role, or public group in Salesforce, manually, whenever you fancy. Although it isn't automatic like the 3 cases above, manual sharing gives you the flexibility to share particular records. So manual sharing is for when you can't do it in any of the 3 ways above. That is, when you can't grant access to the record via OWD, role hierarchies or Sharing Rules, you use Manual Sharing.
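The four record-level mechanisms above stack on top of each other: OWD sets the restrictive baseline, and each later mechanism can only open access further, never restrict it. Here is a minimal Python sketch of that decision stack; the function, its parameters and the rules are my own illustration of the concept, not Salesforce's actual implementation:

```python
# Conceptual model of Salesforce record-level access: start from the
# restrictive OWD baseline, then let each sharing mechanism widen access.

def can_read(user, record, owd, manager_of, sharing_rules, manual_shares):
    """Illustrative only: True if `user` may read `record`."""
    # 1. You always see your own records.
    if record["owner"] == user:
        return True
    # 2. OWD baseline: a public sharing model opens the object to everyone.
    if owd in ("Public Read Only", "Public Read/Write"):
        return True
    # 3. Role hierarchy: managers (direct or indirect) see subordinates' data.
    boss = manager_of.get(record["owner"])
    while boss is not None:
        if boss == user:
            return True
        boss = manager_of.get(boss)
    # 4. Sharing rules: lateral grants between owners and other users/groups.
    if (record["owner"], user) in sharing_rules:
        return True
    # 5. Manual sharing: the owner explicitly shared this very record.
    if (record["id"], user) in manual_shares:
        return True
    return False

# Tiny org: ceo -> manager -> rep
manager_of = {"rep": "manager", "manager": "ceo"}
record = {"id": 1, "owner": "rep"}
print(can_read("ceo", record, "Private", manager_of, set(), set()))   # True, via hierarchy
print(can_read("peer", record, "Private", manager_of, set(), set()))  # False
print(can_read("peer", record, "Private", manager_of, set(), {(1, "peer")}))  # True, manual share
```

Note how the checks only ever widen access: a 'Private' OWD locks the object down, and the hierarchy, sharing rules and manual shares each punch specific holes in that lock.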

This was my attempt to give you a taste of what data security is in Force.com. We started by letting you see the object, then playing with which fields of the object you can see, and lastly which records of the object you can see.

If you are interested in reading more on this topic, check 'Force.com Fundamentals' by Chris McGuire and Caroline Roth. A very 'Cloud' book.

Thursday, 4 November 2010

Oracle Apex 4.0 UKOUG SIG in London

I attended the UKOUG SIG on Apex 4.0 in London yesterday and learned lots of new things. Thanks to presenters like John Scott from SumNeva and Hillary Farrell from Oracle. Looking forward to receiving the seminar notes.

Briefly the things to take home from this SIG meeting for me were:
  • Oracle Apex plugins are great. Great presentation from John Scott on this.
  • In Oracle Apex release 4.0.2, there will be new themes which will render nicely, almost like native apps, on iPhones and iPads.
  • Lots of tips and tricks on how to scale and tune Oracle Apex using tools like JMeter, Firebug and YSlow
The Q&A was interesting, as it seems I am not the only person on earth who thinks that having such an excellent tool as Oracle Apex without "publishing" capabilities (printing to PDF, Word etc.) is a bad thing. Why doesn't Apex enable you to print to PDF by default? Why do you have to buy more software from Oracle, such as BI Publisher, to do this? I hope I am not sounding stupid. I know there are free alternatives which enable you to print to PDF, such as Apache FOP, but hey, why can't I just print to PDF out of the box?

Specifically, Oracle Apex has a feature which enables you to migrate your Oracle Forms & Reports applications into Apex. That is, you can say bye-bye to your Oracle Forms & Reports fat clients and get your Forms & Reports on the web with Apex, a brilliant free alternative to the expensive Oracle Fusion Middleware stack where the new version of Forms & Reports lives. Who wouldn't want that? Especially in the age of Cloud computing! What a cool idea!

But there is a snag. You can't just convert your stylish Reports for free. If you want to carry an Oracle PDF report you created in Oracle Reports, say an invoice with colors and boxed sections, pixel-by-pixel as it is to Oracle Apex, you will first have to convert it in BI Publisher to a "Report Layout" file and then import it into your Oracle Apex workspace. There is no other way to extract the XSLT from your Oracle Report and upload it to Oracle Apex; you will have to go through the BI Publisher route. Prove me wrong, please: has anyone found another free and legitimate way?

You might say that with a bit of configuration work you can get Apache FOP to print PDF output from your Apex reports and regions, and you can do that for free. You can even create new layouts and print them to PDF using Apache FOP; that is fine. But what if you want to migrate 100 reports, all invoice and billing templates which took developers ages to create? You will realize that moving an existing Oracle Reports report style (XSLT) into Oracle Apex as-it-is, pixel-by-pixel, is possible only through BI Publisher. I have tried and researched deep and wide on this topic. Has anybody managed to extract the XSLT from their existing Oracle Reports and create an Apex Report layout?

I like Apex, I like the idea. But to get the business and the managers to like it, Apex will have to do a little bit more.

Friday, 22 October 2010

Oracle 11g Streams Synchronous Capture

Have you ever wanted just to maintain a table in two different databases? In two different schemas? Something like replicating, keeping a copy of a table in sync in another database, not in real time but almost real time? There is an easy way: use Oracle Streams Synchronous Capture!

Many times in development I get this request, where a developer in a team comes along and asks "Can we get a copy of that table in our schema in some sort of sync, please?". Alternative ways of providing the developer with this table would be backup/restore or Data Pump, both not really "in sync" and overkill solutions. There is a better way: Oracle Streams Synchronous Capture.

Oracle Streams is a tool which propagates information between databases using Advanced Queuing in almost real time. It has 3 main components: Capture, Propagate and Apply. Capture is the process with which you capture the data in the source database. Propagate is the process which transfers the data from the source database to the target database. The Apply process copies (inserts) the data into the target database once the data arrives at the target database. Oracle Streams is easy to set up and use and, most important of all, it is FREE. Yes, Oracle Streams would be the perfect candidate for having data available in two or more database systems in sync for whatever reason, be it development tests, production replication between sites or whatever legitimate reason you have for a copy of your data to exist somewhere else as well.

You might argue there is something fundamentally wrong with asking to have a copy of the data. That is, the data twice, three times and even more, in different places. Having the copy of the data in different places, not in real time but with delay, might dilute the truth and might even version the truth, when all we want is just "one" truth, one real and correct answer. Oracle Streams will usually have two ends, the source database and the target database, and there is nothing stopping us from changing the data in the target database after it is streamed down from the source. I will probably agree with you if you are thinking like this. Nevertheless, Oracle Streams still proves useful.

Reading the Oracle documentation on Oracle Streams, you can see that although there are multiple ways to configure Oracle Streams, there seem to be only 2 kinds: the asynchronous Redo Log Capture based Streams and the Synchronous Capture "internal-mechanism/trigger" based Streams. Both have their uses. The former is the heavyweight type, where all changes on your source database objects are captured from the redo logs of the source database and then propagated to the target database, where the Apply process picks them up and applies them. A system suitable if you want to stream everything, including DML and DDL, between the source and target databases, or more than just a few tables. On the other hand, if you want to stream only the DML of a few tables between databases, then you can use Oracle Streams Synchronous Capture, where a trigger-like internal mechanism captures the changes as they happen in the source database and then propagates them to the target database; no redo log mining is used for this. This is exactly what this post is all about: using Oracle Streams Synchronous Capture to replicate a few tables between databases, a new feature of Oracle 11g which doesn't rely on archive logs or redo logs to propagate DML changes on tables between databases.
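The capture → propagate → apply pipeline can be pictured as three queue stages moving logical change records (LCRs) along. The toy Python below is only an analogy for that flow, with made-up names; it is not Oracle code:

```python
from queue import Queue

# Toy analogy of Streams: capture DML on the source, propagate the LCRs,
# apply them on the target. All names here are illustrative, not Oracle APIs.

source_table = {}          # the source database table
target_table = {}          # the target database replica
source_queue = Queue()     # capture enqueues LCRs here
target_queue = Queue()     # propagation delivers LCRs here

def capture(op, key, value=None):
    """Synchronous capture: enqueue an LCR as the DML happens."""
    if op == "insert":
        source_table[key] = value
    elif op == "delete":
        source_table.pop(key, None)
    source_queue.put({"op": op, "key": key, "value": value})

def propagate():
    """Move LCRs from the source queue to the target queue."""
    while not source_queue.empty():
        target_queue.put(source_queue.get())

def apply_changes():
    """Apply process: replay each LCR against the target table."""
    while not target_queue.empty():
        lcr = target_queue.get()
        if lcr["op"] == "insert":
            target_table[lcr["key"]] = lcr["value"]
        elif lcr["op"] == "delete":
            target_table.pop(lcr["key"], None)

capture("insert", 1, "first row")
capture("insert", 2, "second row")
capture("delete", 1)
propagate()
apply_changes()
print(target_table)  # {2: 'second row'} -- target is in sync with the source
```

The point of the analogy: the capture side never touches the target directly; everything travels as queued change records, which is why the two databases converge without sharing storage.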

Below is a step-by-step implementation of Oracle Synchronous Capture in Oracle 11g. I use two database instances: SOURDB for the source database and TARGDB for the target database. I create the Oracle Streams administrator schemas on these databases first. Then I create the Oracle Streams users, the table and the capture, propagate and apply processes. The PL/SQL packages DBMS_STREAMS_ADM and DBMS_APPLY_ADM are used, although if you read the documentation you will see there are other packages you can use.

Step by step implementation

1. Create the U_SOURCE users to be used for the test on both databases. Yes, on both systems the user is called U_SOURCE, as it has to be identical for the following example to work. You can have different usernames, but then you have to create a transformation; look in the documentation and you will understand. As I didn't want this blog entry to take forever, I took a shortcut and named the schema U_SOURCE in both SOURDB and TARGDB.

-- on SOURDB create user
connect / as sysdba

create user u_source
identified by u_source
default tablespace users
temporary tablespace temp;

grant connect,resource to u_source;

-- db link to connect to target

-- on TARGDB create user

connect / as sysdba

create user u_source
identified by u_source
default tablespace users
temporary tablespace temp;
grant connect,resource to u_source;

-- db link to connect to source

2. Now create the Streams Administrator users on both SOURDB and TARGDB databases, with their own STREAMS tablespace. Run the script below on SOURDB as well as TARGDB as SYSDBA.


CREATE USER strmadmin IDENTIFIED BY strmadmin;

grant dba to strmadmin;

3. Now it is time to create the Capture process in SOURDB. We use Oracle AQ as the structure to capture the DML changes. That is why we first need to create the queue and then add the table whose changes we want to capture. Also, with synchronous capture you don't have to start the capture process; it starts automatically once you create the table rule.

--in SOURDB as user STRMADMIN set up the queue
--if you want to remove the queue use
--exec dbms_streams_adm.remove_queue('strmadmin.SOURDB_queue',true,true);
begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'strmadmin.SOURDB_queue_table',
    queue_name  => 'strmadmin.SOURDB_queue',
    queue_user  => 'strmadmin');
end;
/

4. Now, still as the STRMADMIN user in SOURDB which owns the above queue, you can add the table from which you want to capture the changes like this:

begin
  dbms_streams_adm.add_table_rules(
    table_name         => 'U_SOURCE.T1',
    streams_type       => 'SYNC_CAPTURE',
    streams_name       => 'SYNC_CAPTURE',
    queue_name         => 'strmadmin.SOURDB_queue',
    include_dml        => true,
    include_ddl        => false,
    include_tagged_lcr => false,
    inclusion_rule     => true,
    source_database    => 'SOURDB');
end;
/

5. Still logged in as the STRMADMIN user in SOURDB, create the propagation process between SOURDB and TARGDB.

--To drop an existing propagation
--exec dbms_propagation_adm.drop_propagation('SOURDB_propagation');

--To create a propagation between SOURDB and TARGDB
begin
  dbms_streams_adm.add_table_propagation_rules(
    table_name             => 'u_source.t1',
    streams_name           => 'SOURDB_propagation',
    source_queue_name      => 'strmadmin.SOURDB_queue',
    destination_queue_name => 'strmadmin.TARGDB_queue@TARGDB',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    include_tagged_lcr     => FALSE,
    source_database        => 'SOURDB',
    inclusion_rule         => TRUE,
    queue_to_queue         => TRUE);
end;
/

6. On the target database TARGDB create the Apply process; again, you will first have to create the queue and then the Apply process.

--exec dbms_streams_adm.remove_queue('strmadmin.TARGDB_queue',true,true);
begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'strmadmin.TARGDB_queue_table',
    queue_name  => 'strmadmin.TARGDB_queue',
    queue_user  => 'strmadmin');
end;
/

--now create the apply process
begin
  dbms_apply_adm.create_apply(
    queue_name     => 'strmadmin.TARGDB_queue',
    apply_name     => 'sync_apply',
    apply_captured => false); -- very important to be set to FALSE!
end;
/

--set the Apply process rules, which table etc.
begin
  dbms_streams_adm.add_table_rules(
    table_name         => 'U_SOURCE.T1',
    streams_type       => 'APPLY',
    streams_name       => 'SYNC_APPLY',
    queue_name         => 'strmadmin.TARGDB_queue',
    include_dml        => true,
    include_ddl        => false,
    include_tagged_lcr => false,
    source_database    => 'SOURDB');
end;
/

7. Once you have created the Apply process on the target database, you will have to instantiate the target database 'replica' table with the SCN number. This creates a point in time on the target table which marks the point from which the target table started accepting changes from the source database table. So, in the target database TARGDB as the user U_SOURCE do the following:

select dbms_flashback.get_system_change_number apply_scn from dual;

insert into u_source.t1
(select *
from u_source.t1@SOURDB as of scn 13983123089);


--Instantiate, point of no turning back!
begin
  dbms_apply_adm.set_table_instantiation_scn(
    source_object_name   => 'u_source.t1',
    source_database_name => 'SOURDB',
    instantiation_scn    => 13983123089);
end;
/

--Confirm the instantiation is complete
select source_database,
       source_object_owner||'.'||source_object_name object,
       instantiation_scn
  from dba_apply_instantiated_objects;

--Start the Apply process, from this point onwards all changes in source table T1 are
--propagated to the target table T1.
exec dbms_apply_adm.start_apply('sync_apply');

8. Now you can test the setup by going to the SOURDB U_SOURCE schema, inserting some rows into table T1, and coming back to the TARGDB U_SOURCE schema to see the changes being replicated. If everything went alright, your DML will be propagated across from source to target as a logical change record (LCR), as Oracle Streams parlance denotes it.

For more reading on monitoring the Oracle Streams operation, go to Monitoring a Capture Process in the documentation. You can also use Oracle Enterprise Manager to monitor Oracle Streams. Unfortunately, although you CAN set up the first type of Oracle Streams, based on redo logs, with Enterprise Manager, you CAN NOT set up Oracle Streams Synchronous Capture with it.

Saturday, 11 September 2010

Web services

I have been going around websites/wikis to find a definition for: what is a web service? I was especially interested to find out about the kinds of web services which, of course, have to do with databases, Oracle in particular. I spent the whole day looking at certain websites and here is my DIY one-liner definition:

Web services are all about giving and taking data over HTTP. When you give data you are a 'provider' of a web service and when you take data you are a 'consumer' of a web service.
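The give-and-take can be sketched outside Oracle in a few lines of Python: a tiny HTTP provider that serves data, and a consumer that fetches it. This is only a generic illustration of the provider/consumer roles using the standard library, not the Oracle XML DB mechanism described below:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Provider: answers HTTP requests with data.
class Provider(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"message": "hello from the provider"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Provider)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Consumer: takes data from the provider over plain HTTP.
url = f"http://127.0.0.1:{server.server_port}/"
with urlopen(url) as response:
    data = json.loads(response.read())
print(data["message"])  # hello from the provider
server.shutdown()
```

In Oracle terms, the `Provider` class plays the role of the database publishing a PL/SQL procedure over HTTP, and the `urlopen` call plays the role of UTL_HTTP reaching out to someone else's service.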

Oracle 11g is doing both.

You can turn your Oracle database into a web service provider, a source of data for the internet where people can come to a certain URL (WSDL) and query data that you keep in your database, just using HTTP. The visitors to this URL will run your database PL/SQL procedures, functions and packages and then be able to read the results. This setup is called being a provider of a web service.

On the other hand, being a consumer of a web service in Oracle is being able to write SQL or PL/SQL which makes external calls from your Oracle database to other web service providers' services on the internet. I have used this type of web service in my blog. An example of such a web service is when you make an HTTP request to the Google Maps API or to other sources such as Yahoo and PayPal, which are themselves providers of web services. See this demo app where I use Google Maps and the Force.com API.


When your database makes a call to an external web service, you become the consumer of a web service.

A platform to try both, with no hassle of course, is Oracle Apex. Installing Apex in Oracle 11g will enable you to use web services in both ways described above.

The best place to start is Chapter 33 of the Oracle XML DB Developer's Guide. There you will find how to set up Oracle 11g to use Native Oracle XML DB Web Services. Once set up, all you really have to do is create a PL/SQL procedure or function and the WSDL is automatically generated and served by Oracle for you. All you have to do then is run the procedure from the URL via a little program which calls this web service; yes, you can use 'wget' or any other command-line tool which can make a call to a web service.

The next link is a good blog entry which describes step by step how to set up Native Oracle XML DB Web Services and how to access PL/SQL stored procedures using a web service in Oracle. The post is called First Tests of 11g native web services, by Paul Gallagher. I really liked Paul's post; it is concise and straight to the point. It even has a Perl script which tests whether your web service provider database setup works.

Other related posts on the topic of Oracle web services are:

Create a Native Database Web Service by Marco Gralike

Monday, 21 June 2010


This is a post about Oracle hierarchical queries. The other day I was after a solution which would display the hierarchy in the execution of some PL/SQL procedures. That is, there was a main procedure which would call several dozen other procedures in a predefined order. This order was pre-recorded in a table. What I was after was the ancestral relationship in the order of executions. To demonstrate simply, say there is a table called steps, which looks like this:

MAIN_ID                DEPENDANT_ID           
---------------------- ---------------------- 
1                      2
1                      3   
2                      4
2                      3

Process MAIN_ID 1 depends on the execution of 2 and 3 first and then 2 depends on the execution of 4 and 3. Can we execute 1 after we execute 2 and 3?

Well, the table says YES! But how about ancestry? We can clearly see that for process 2 to execute, process 4 must execute first! Wouldn't it be nicer if we could show this ancestry relationship via an SQL query straight out of this table? An SQL query which would show us all dependants of 1 and 2, including parents, granddads etc...

So the results we are after could look like this:

MAIN_ID                DEPENDS_ONTHIS
---------------------- --------------
1                      2,4,3
2                      3,4

Here is the query to obtain such hierarchical relationships out of this table, using the CONNECT_BY_ROOT Oracle 10g operator for hierarchical queries and some XMLAGG function magic:

select main_id,
       rtrim(xmlagg(xmlelement(d, dependant_id||',')).extract('//text()'),',') depends_onthis
  from (select distinct connect_by_root main_id main_id,
               dependant_id
          from steps
       connect by prior dependant_id = main_id
         order by 1, 2)
 group by main_id
 order by main_id;

MAIN_ID                DEPENDS_ONTHIS
---------------------- --------------
1                      2,4,3
2                      3,4
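For comparison, the same transitive "depends on" closure can be computed procedurally. Here is a small Python sketch of what the CONNECT BY walk is doing under the hood, using the same four rows:

```python
# Compute, for each process, every process it transitively depends on --
# the same answer the CONNECT BY / CONNECT_BY_ROOT query produces.

steps = [(1, 2), (1, 3), (2, 4), (2, 3)]  # (main_id, dependant_id) rows

children = {}
for main_id, dep_id in steps:
    children.setdefault(main_id, []).append(dep_id)

def depends_on(node):
    """All direct and indirect dependants of `node`, via depth-first walk."""
    seen = set()
    stack = list(children.get(node, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(children.get(dep, []))  # follow grandchildren too
    return sorted(seen)

for main_id in sorted(children):
    print(main_id, depends_on(main_id))
# 1 [2, 3, 4]
# 2 [3, 4]
```

The `seen` set plays the role Oracle's cycle detection plays in CONNECT BY: each dependant is counted once no matter how many paths lead to it.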

I found this new operator in Oracle 10g amusing and cool; it is covered in the Oracle documentation under hierarchical queries.

And here are the test table scripts if you want to run the above test case for yourself:

drop table steps;

create table steps (
main_id number,
dependant_id number
);

insert into steps values (1,2);
insert into steps values (1,3);
insert into steps values (2,4);
insert into steps values (2,3);

Thursday, 1 April 2010

Materialized View Refresh and the ATOMIC_REFRESH parameter

When refreshing big materialized views in large data warehouses it is always good to check the parameter options available in the DBMS_MVIEW.REFRESH procedure.

Oracle changes the default parameters of its DBMS packages from release to release. I remember back in the days of Oracle 9i a complete refresh would truncate the materialized view, thus the only work the database actually did in a complete refresh was an INSERT after the TRUNCATE.

Now in Oracle 10g and Oracle 11g parameters have changed. When there is a COMPLETE materialized view refresh, for the purposes of data preservation, a DELETE is done instead of a TRUNCATE!

The reason for this is that Oracle "changed" the default value of the ATOMIC_REFRESH parameter of the DBMS_MVIEW.REFRESH procedure. In earlier releases the parameter was set to FALSE by default, but now it is set to TRUE, which forces a DELETE of the materialized view rows instead of a TRUNCATE, making the materialized view more "available" at refresh time.

But this DELETE is an expensive operation in terms of the refresh time it takes. A DELETE is always expensive, and sometimes even impossible, when we are talking about the complete refresh of materialized views with millions of rows. At COMPLETE refresh a DELETE takes a long time, as it has to keep the materialized view available to users and comply with ACID transactional requirements, but who cares? What if we really want to get the refresh done quickly and we don't care about the availability of the materialized view, say in a data warehouse maintenance window?

I have tested how the refresh time improves when you set ATOMIC_REFRESH => FALSE in the procedure, on a 1 CPU home computer with Oracle 11g. If you have more CPUs it will be even quicker because of the PARALLEL parameter.

I just wanted to show the improvement you get in refresh times when ATOMIC_REFRESH is FALSE and Oracle does a TRUNCATE instead of a DELETE at complete refresh. A real time saver with big materialized views in data warehouses. Here is the test:

-- Enable parallel DML
alter session enable parallel dml;

-- Cleanup
drop materialized view mv_base_table;
drop table base_table purge;

-- Create the Test table
create table base_table (
    ID NUMBER PRIMARY KEY,
    X  VARCHAR2(255),
    Y  VARCHAR2(255)
);

-- Make the test table parallel
alter table base_table parallel 5;

Load the test table with data from a dummy Cartesian join so that you get millions of rows. Do you like my loop? It loops only once. Well, I had it there to add more millions if I wanted to :-)

begin
  FOR I IN 1 .. 1 LOOP
    INSERT /*+ PARALLEL(BASE_TABLE, 5) */ INTO BASE_TABLE
      SELECT ROWNUM, a.object_name, a.status
        FROM all_objects a, emp b, dept c;
  END LOOP;
  COMMIT;
end;
/

3010392 rows inserted

Create a materialized view log to enable fast refresh

CREATE MATERIALIZED VIEW LOG ON base_table WITH PRIMARY KEY;

Create the fast refresh materialized view

CREATE MATERIALIZED VIEW mv_base_table
  PARALLEL 5
  REFRESH FAST
  AS SELECT * FROM base_table;

Update the Test table to simulate changing data:

update /*+ PARALLEL(BASE_TABLE, 5) */ base_table set y = 'ATOMIC_TEST';
commit;

3010392 rows updated

Now you are ready to do the materialized view refresh with the ATOMIC_REFRESH value set to TRUE and then to FALSE. Observe the refresh times as you refresh the materialized view with the two different values of the parameter.

TRUE case with DELETE:

exec dbms_mview.refresh(list => 'MV_BASE_TABLE', method => 'C', atomic_refresh => TRUE);

Elapsed 558.8 seconds

Now it is time to do the test with the ATOMIC_REFRESH parameter set to FALSE.

Must update again to simulate change in the logs:

update /*+ PARALLEL(BASE_TABLE, 5) */ base_table set y = 'ATOMIC_TEST2';
commit;

exec dbms_mview.refresh(list => 'MV_BASE_TABLE', method => 'C', atomic_refresh => FALSE);

Elapsed 215.3 seconds

Half the time! Well if you have real computers it will be much faster. So again we see DELETE is really not good when you work with millions of rows. Nice to have options!

Monday, 1 February 2010

Migrate your Oracle 10g database to ASM

This blog entry is about migrating your Oracle 10g database to ASM using OEM Database Control on Ubuntu 9.10 Karmic Koala. To be able to do this you must have:
  • CSS service running on your ubuntu box
  • ASM instance running
  • OEM Database control running
To resolve your problems with starting the Oracle CSS daemon and getting the ASM instance running on Ubuntu Karmic Koala, follow the resources I give below.

You will have to do some hocus-pocus with some parameters to adjust memory, improvise disks with dd (cause I only had one) and work around CSS startup script hiccups. Persist with the resources oraclebase and others kindly provide and you will be running ASM on a single instance on Ubuntu 9.10 Karmic Koala in no time.

You should have the above three ready before you attempt to migrate your databases to ASM. You must go through the trouble of creating an ASM instance with DBCA as per the resources below. No ASM instance, no migration!

Once you are ready, there are only 4 steps to follow in OEM and they are easy!

About ASM

For years DBAs and SAs have been looking over spindles and trying to decide where to put what. Which files shall I put on which disk? RAID this RAID that and so on...

What is the best way to tune I/O for Oracle?

The answer, easily enough, is: let Oracle tune I/O for you with their SAME (Stripe and Mirror Everything) approach. Moreover, the mirroring and striping are done at the extent and file level, not the disk level.

That is exactly what ASM does: it balances I/O activity. It is an Oracle instance which runs alongside your database instances, in memory, and balances the I/O of database data files. It balances and tunes I/O whenever you add or drop disks, even online with no intrusion!

All your Oracle datafiles are now controlled by ASM, and the way it works is that ASM makes sure a file is evenly spread across all disks when the file is allocated, so rebalancing is not required.
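The even-spread idea can be illustrated with a toy allocator: hand out a file's extents round-robin across the disks in a group, and every disk ends up with a near-equal share of the I/O by construction. This is purely a conceptual sketch with made-up names, not how ASM is actually implemented:

```python
# Toy illustration of the ASM even-spread idea: allocate a file's
# extents round-robin across disks so I/O is balanced by construction.

def allocate(num_extents, disks):
    """Map extent number -> disk name, round-robin across the disk group."""
    return {extent: disks[extent % len(disks)] for extent in range(num_extents)}

disks = ["disk1", "disk2", "disk3", "disk4"]
layout = allocate(100, disks)  # a 100-extent file on a 4-disk group

# Count extents per disk to show the even spread.
per_disk = {d: 0 for d in disks}
for disk in layout.values():
    per_disk[disk] += 1
print(per_disk)  # {'disk1': 25, 'disk2': 25, 'disk3': 25, 'disk4': 25}
```

Because allocation itself spreads the extents, no disk becomes a hot spot for that file; this is the sense in which tuning by careful file placement becomes unnecessary.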

All this sounds very good, but if we no longer have to care about I/O tuning, what will happen to SQL tuning? Are we going to do structural and logical tuning only?

Further, with the emergence of "Oracle TimesTen In-Memory" database technology, where the whole database can easily be loaded into solid state memory (flash disks), I wonder how long ASM will last?


Tuesday, 19 January 2010

Salesforce and Force.com

I like Salesforce and Force.com. Both, I think, are real pioneers in the terra incognita, or maybe I should call it the nimbus incognita, which is called the "Cloud platform". Both are very easy to use SaaS (Software as a Service) platforms. The applications are based on solid data modelling principles and the back-end relational database used is a massively parallel implementation of Oracle databases around the world. You can check their system availability and the number of transactions done per day here.

Although Salesforce insists that their applications are stored in objects which are not "just database tables", the first step in creating an application involves capturing the data model. This is really a very "database" platform.

The platform offers a solid 2x2=4 environment to build and host applications, and all work is done with a browser. That is, their IDE is the browser. They even provide you with a sandbox for your unit tests and an Eclipse plug-in for code access to the platform. The IDE is shared between their CRM platform Salesforce and their developer platform Force.com.

In the Salesforce platform you are able to extend a complete and very well designed CRM data model to include anything you wish. In the Force.com platform you can create applications from scratch, about anything! Among the two, Force.com is what impressed me the most and made me write this blog post. There is not much to be done in the Salesforce platform, as it is a complete working CRM application in itself which would suit any size of company needing a good CRM system. Force.com, on the other hand, is a blank canvas. As long as you know what you want, you can start building it straight away. You start with the data model, then you determine your objects and their relationships, and once you have layered your data model in the system, the platform straight away provides you with the means to update and insert records in the back-end database. It automatically builds forms and reports. It also provides you with workflow and automatic calendaring and emailing functions. Plus, it has a coding language called Apex, which is Java-like with classes and triggers, enabling you to enhance your application logic. It also has a very HTML-like markup language called Visualforce. If you want to migrate your legacy data into the platform you should look at cloud-ready ETL and data integration tools like Informatica (£££) and Talend Open Studio (free).

A project I recently did at Kizoom using this platform took me 82 hours, that is approximately 10 days, to complete. The project was migrating all data and application logic from a legacy database system to Force.com. During my work, data integration tools like Informatica and Talend helped me move the data quickly into the data model, and the already built-in features of the platform provided me with data entry forms and advanced reports. Because all this is in the cloud, I was able to easily utilise geolocation with the Google Maps API. The legacy system was a Microsoft SQL Server online database of fault records for computers in publicly available street kiosks around the UK. The new platform made the application available in the cloud, which meant the UK-wide field engineers of the company were able to easily log in and access it on the road without having to use the company network (VPN), just the cloud.

Below is a snapshot of the app.


The platform is very capable, easy to use and very intuitive. It has tremendous potential and achieves good economics in the development effort. One shouldn't be shy to use it.

Monday, 4 January 2010

SQL Tuning Advisor and SQL Access Advisor in Oracle 10g

Oracle 10g, with its AWR (Automatic Workload Repository) for automatic collection of performance statistics and its ADDM (Automatic Database Diagnostic Monitor) for identifying the root causes of problems, is a self-healing and self-tuning database which is getting wiser every day and very soon will start to think on its own.

There are features in Oracle 10g which help you to quickly find out what is wrong with the database. Features like ASH (Active Session History), which lets you peek into what your active sessions are waiting for, and Server Generated Alerts with Metrics and Thresholds are all tools which tell you what is really going on in your database. The OEM (Oracle Enterprise Manager) Database Control is a brilliant interface you can use to learn and find out more about all these smart enhancements. All in the name of pro-active database management.
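As a taster, here is a hedged sketch of the kind of peek ASH allows (the view and columns are as documented in 10g, but note ASH requires the Diagnostics Pack licence; adjust the time window to taste):

SQL> -- Top wait events sampled over the last 10 minutes
SQL> SELECT event, COUNT(*) AS samples
  2  FROM   v$active_session_history
  3  WHERE  sample_time > SYSDATE - 10/1440
  4  AND    session_state = 'WAITING'
  5  GROUP  BY event
  6  ORDER  BY samples DESC;

A quick glance at the top rows of this tells you what your sessions have actually been waiting on, before you reach for any advisor at all.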

What is database well-being all about anyway? Isn't it high throughput (OLTP) and fast response time (DSS)? Aren't all databases working to achieve these?

Oracle 10g also has an Advisory Framework which, based on workload, will try to advise you on configuration issues like the Buffer Cache, Library Cache, PGA, Undo and the Segments used. There are also Tuning-Related Advisors which deal with SQL tuning, and these are:

I - SQL Tuning Advisor
II - SQL Access Advisor

In this post I will try to explain the usage of these two advisors in the job of tuning bad and nasty SQL statements.

All automation is good, and self-healing databases are very good. But what happens in the development arena when a developer turns up at your desk, pointing to an email he/she just sent you, and says '...what's wrong with this SQL statement? It takes ages! Why is Oracle so slow?...' I am particularly fond of the last comment.

How can we quickly look at what is wrong with the SQL statement using the SQL Tuning Advisor? Here is how.

I - SQL Tuning Advisor

Using the DBMS_SQLTUNE package to tune SQL

Assuming the suspect query is something like "SELECT * FROM SESSIONS, SOURCE_EVENTS", two very large tables, and your schema has the ADVISOR privilege granted, you would put this query into the Oracle SQL Tuning Advisor engine using the PL/SQL package DBMS_SQLTUNE like this:

1. Create the tuning task

In SQL*Plus and in the schema where objects live, use the procedure dbms_sqltune.create_tuning_task to create a tuning task.

declare
  tname varchar2(30);
begin
  tname := dbms_sqltune.create_tuning_task(sql_text => 'select * from sessions, source_events');
end;
/

PL/SQL procedure successfully completed.

Then you can check that your task was created and get the TASK_NAME value, as you will need it in the next step. I could have named the task myself; I was just being lazy...

SQL> select task_name from user_advisor_log;
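Had I not been lazy, create_tuning_task also takes a task_name parameter, so the task gets a memorable name up front and you can skip the lookup in user_advisor_log. A sketch (the task name here is made up):

declare
  tname varchar2(30);
begin
  tname := dbms_sqltune.create_tuning_task(
             sql_text  => 'select * from sessions, source_events',
             task_name => 'cartesian_check');
end;
/

You would then pass 'cartesian_check' to execute_tuning_task and report_tuning_task in the steps below instead of the system-generated TASK_51415.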


2. Execute the tuning tasks

Execute the tuning task you created in step 1 with the procedure dbms_sqltune.execute_tuning_task

SQL> exec dbms_sqltune.execute_tuning_task('TASK_51415');

3. Get the tuning report

Get the tuning report/recommendation with the following SQL.

SQL> set long 50000
SQL> select dbms_sqltune.report_tuning_task('TASK_51415') from dual;


Tuning Task Name : TASK_51415
Tuning Task Owner : ACHILLES
Time Limit(seconds) : 1800
Completion Status : COMPLETED
Started at : 12/20/2009 18:44:05
Completed at : 12/20/2009 18:44:14
Number of SQL Restructure Findings: 1

Schema Name: ACHILLES
SQL ID : 8qsdf0a6qv4g7
SQL Text : select * from sessions, source_events


1- Restructure SQL finding (see plan 1 in explain plans section)
An expensive cartesian product operation was found at line ID 3 of the
execution plan.

- Consider removing the disconnected table or view from this statement or
add a join condition which refers to it.

A cartesian product should be avoided whenever possible because it is an
expensive operation and might produce a large amount of data.


1- Original
Plan hash value: 3883039950

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 33T| 4158T| 4367M (2)|999:59:59 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 33T| 4158T| 4367M (2)|999:59:59 | | | Q1,01 | P->S | QC (RAND) |
| 3 | MERGE JOIN CARTESIAN | | 33T| 4158T| 4367M (2)|999:59:59 | | | Q1,01 | PCWP | |
| 4 | SORT JOIN | | | | | | | | Q1,01 | PCWP | |
| 5 | PX RECEIVE | | 8439K| 829M| 3175 (2)| 00:00:39 | | | Q1,01 | PCWP | |
| 6 | PX SEND BROADCAST | :TQ10000 | 8439K| 829M| 3175 (2)| 00:00:39 | | | Q1,00 | P->P | BROADCAST |
| 7 | PX BLOCK ITERATOR | | 8439K| 829M| 3175 (2)| 00:00:39 | 1 | 16 | Q1,00 | PCWC | |
| 8 | TABLE ACCESS FULL| SESSIONS | 8439K| 829M| 3175 (2)| 00:00:39 | 1 | 16 | Q1,00 | PCWP | |
| 9 | BUFFER SORT | | 4013K| 122M| 4367M (2)|999:59:59 | | | Q1,01 | PCWP | |
| 10 | PX BLOCK ITERATOR | | 4013K| 122M| 532 (4)| 00:00:07 | 1 | 16 | Q1,01 | PCWC | |
| 11 | TABLE ACCESS FULL | SOURCE_EVENTS | 4013K| 122M| 532 (4)| 00:00:07 | 1 | 16 | Q1,01 | PCWP | |

1 row selected

Well done, SQL Tuning Advisor, a good answer. It immediately spotted the Cartesian product and recommended removing it, with the rationale that it would be expensive!

II - SQL Access Advisor

SQL Access Advisor is another Advisory Framework tool which primarily provides advice on the creation of indexes, materialized views and materialized view logs to improve query performance and response time.

Both SQL Tuning Advisor and SQL Access Advisor are quite powerful tools, as they can automatically source the SQL they will tune from multiple different places, including the SQL cache, AWR, SQL Tuning Sets and user-defined workloads. Look at the Oracle documentation to find out how all this is possible under OEM Database Control.

What I focus on here is how to quickly pass through these tools the ad-hoc SQL that the developer will bring to you and demand to be tuned there and then.

So using the QUICK_TUNE procedure from the DBMS_ADVISOR package here is how you would tune a nasty single SQL statement for better data access paths with SQL Access Advisor.

1. Create tuning task

In the schema where the table lives, run the following:

VARIABLE sql_stmt VARCHAR2(4000)
VARIABLE tname VARCHAR2(30)
EXECUTE :sql_stmt := 'select count(*) from sessions where short_code=''abracadambra'' ';
EXECUTE :tname := 'myquicktunetask';
EXECUTE DBMS_ADVISOR.quick_tune(DBMS_ADVISOR.sqlaccess_advisor, :tname, :sql_stmt);

2. Check to see the task is there

Again, to see the task myquicktunetask which you just created, run in the schema:

SQL> select task_name from user_advisor_log;

2 rows selected

3. See tuning advice

Now, to see the advice you get, you will have to look in the DBA_ADVISOR_ACTIONS dictionary view. So log in as DBA and filter for task_name='myquicktunetask'. Well, you can view the advice in the USER_ADVISOR_ACTIONS dictionary view as well.

SELECT command,
       attr2,
       attr3,
       attr4
FROM   dba_advisor_actions
WHERE  task_name = 'myquicktunetask';
(CLOB) myquicktunetask|GATHER TABLE STATISTICS||-1||
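Here the advice was simply to gather statistics, but when the advisor recommends indexes or materialized views the attr columns carry the DDL. Rather than picking it out column by column, DBMS_ADVISOR can hand you the whole implementation script in one go, something along these lines:

SQL> set long 50000
SQL> select dbms_advisor.get_task_script('myquicktunetask') from dual;

The returned CLOB is a ready-made SQL script of all the recommended actions, which you can review and run as-is.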


Well, I find both tools extremely useful. Both are good starting points in resolving SQL tuning issues and discussions. SQL Tuning Advisor seems to do mainly logical optimisation, by checking your SQL structure and statistics, while SQL Access Advisor suggests better data access paths, that is, mainly work which can be done better on disk.