
DB2 LUW – What is a Page?


The logical view of a database consists of the standard objects in any RDBMS – Tables, Indexes, etc. There are a number of layers of abstraction between this and the physical hardware level, both in the OS and within DB2.

Setting the Page Size

The smallest unit of I/O that DB2 can handle is a page. The default page size is 4096 bytes. The default can be set to one of the other possible values when the database is created, using the PAGESIZE clause of the CREATE DATABASE statement.

The possible values for page size in a DB2 database, no matter where it is referenced, are:

  • 4K or 4,096 bytes
  • 8K or 8,192 bytes
  • 16K or 16,384 bytes
  • 32K or 32,768 bytes

Realistically, the page size is set for either a buffer pool or a tablespace. Setting it at the database level on database creation just changes the default from 4K to whatever you choose and changes the page size for all of the default tablespaces and the default bufferpool.

The page size for each bufferpool and tablespace is set at the time the buffer pool or tablespace is created, and cannot be changed after creation.
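As a quick sketch of what that looks like (the buffer pool and tablespace names here are just examples), the non-default page size is chosen as the objects are created:

db2 "CREATE BUFFERPOOL BP8K PAGESIZE 8 K"
db2 "CREATE TABLESPACE DATA8K PAGESIZE 8 K BUFFERPOOL BP8K"

A tablespace can only be associated with a buffer pool of the same page size, which is why the two are usually created as a pair.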

Tables can be moved to a tablespace of a different page size after creation using the ADMIN_MOVE_TABLE procedure, but that operation requires at the very least an exclusive lock, and may not support RI (referential integrity) – I hear RI support is added in 10.1 Fixpack 2.
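As a hedged sketch of such a move – the schema, table, and tablespace names below are hypothetical – the call looks something like this:

db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA','WIDE_TABLE','DATA8K','INDEX8K','LOB8K','','','','','','MOVE')"

The empty strings accept the defaults for the optional arguments, and the final argument, MOVE, runs all phases of the move in a single call.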

Choosing a Pagesize

In my experience, it is rare to have a database created with a different default page size. Every database I currently support has the default page size of 4K, and also has at least one tablespace and one bufferpool with each of the other page sizes.

The most common time you think about page sizes is when you’re creating a table. When DB2 stores data in a table, it CANNOT have a row size larger than the page size minus some overhead. So if you have a row greater than 4,005 bytes in width, you simply cannot keep it in a tablespace with a page size of 4K. The row size does not include large LOBs that you do not wish to in-line, but it does include the maximum length of every VARCHAR column.

This is one area where DB2 differs from Oracle. To my knowledge, in Oracle, you can have a row that spans multiple pages. From what I hear, DB2 is planning to support that in a future release, but they’re planning to do it by plopping the rest of the row in a LOB – there was an audible groan when they announced that in a conference session I was in, due to the other problems in dealing with LOBs.

It is also important to think of the geometry of the table when choosing a page size. If you have a row size of 2010 bytes, that means that only one row will fit on every 4K page, and therefore nearly 50% of the space allocated to your table will be empty and wasted. A row size that large would do much better on an 8K or 16K or even 32K page. It is important to consider this for every table as you create it, and to revisit it with any column length alterations you make.

I have, on several different occasions, had a request from a developer to increase a column size, and have had to move the table to a different tablespace to accommodate the change because the increase pushed the row over the limit. Historically, moving a table between tablespaces has required the table to be offline – which can be problematic in a live production database. If you’re properly testing changes in at least one development environment, you will discover these kinds of issues before they get to production.

While page overhead is not exactly 100 bytes, it is usually not far from it, so to determine how many rows will fit on a page, you can usually use:

(pagesize-100)/maximum row length
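For example, for a hypothetical table with a maximum row length of 350 bytes on a 4K page:

(4096 - 100) / 350 = 11 rows per page (rounding down)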

Again, this does not count LOB data, but only LOB descriptors. LOB descriptors are stored on the data page. The remainder of the LOB is stored in another location, assuming you have not INLINED the LOBs. From the standpoint of a page, the main reason for using a LOB is to allow a large portion of unstructured data to be stored elsewhere – not directly on the data page. LOB descriptor sizes on the page depend on the size of the LOB and vary from 68-312 bytes.

Generally, smaller pages are preferable for OLTP and e-commerce databases because they allow you to handle less data at a time when you’re expecting smaller queries.

The total table size is another factor in choosing a page size. New tablespaces should generally be created as “LARGE” tablespaces. But “REGULAR” used to be our only option, and for REGULAR tablespaces with a 4K page size, the limit on table size in a DMS tablespace is just 64 GB (per partition). On more than one occasion I have dealt with hitting that limit, and it is not a pleasant experience. For LARGE tablespaces, the limit per partition for a 4K page size is 8 TB – much higher.
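If you are stuck with an older REGULAR DMS tablespace approaching that limit, conversion is possible – a sketch with a hypothetical tablespace name (indexes on the tables in it generally need to be reorganized or rebuilt afterward before the tables can actually grow past the old limit):

db2 "ALTER TABLESPACE DATA4K CONVERT TO LARGE"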

Since page size is set at the tablespace level, you can also consider the appropriate page size for your indexes, assuming you’re putting them in a different tablespace than the data. You cannot select a tablespace for each index, only an index tablespace for the table as a whole, so you’ll want to consider possible indexing scenarios when choosing where your indexes go as well. The limit for index key (or row) size in DB2 9.7 is the page size divided by 4 – so for a 4K page size, it is 1024 bytes. Back in DB2 8.2, the limit was 1024 for all page sizes.

An interesting side note: if you’re not familiar with the “SQL and XML limits” page in any DB2 Info Center, I recommend looking it up and becoming familiar with it. That’s where I verified the index row limit and there are all kinds of interesting limits there.

Data Page Structure

Ok, this section is admittedly just a tad fuzzy. I had the darndest time getting information on this, even reaching out to some of my technical contacts at IBM. But I’m going to share what I do know in case it helps someone.

Each data page consists of several parts. First is the page header – a space of 91-100 bytes reserved for information about the page. The page header identifies the page number and some other mysterious information I cannot find much detail on.

Next comes the slot directory – which is variable in size, depending on how many rows are on the page. It lists the RIDs that are on the page and the offset on the data page where each record begins. If the offset is -1, that indicates a deleted record. In the structure of the rows themselves on the page, it appears that the records “start” at the bottom of the page. There may be open space between records due to any of the following:

  • Deleted records
  • Reduced size of VARCHAR values
  • Relocation of records due to increased size of the VARCHAR that will no longer allow the row to fit on the page

This open space cannot be used by additional records until after a reorg.

Finally, there may be continuous open space on a page that is left over or simply not used due to either deletes followed by reorgs or due to the pages simply not being filled yet.

[Image: data page structure]

I found some references to each page also having a trailer, but they were not from sources I could consider credible, so there may or may not be a trailer on the page. Most of the information here comes from a page in the IBM DB2 Info Center. I would love to hear reader comments on this topic, or any references anyone may have with more detailed data.

Not every page in a table is a data page dedicated fully to user data. There is one extent of pages allocated for each table as the extent map for the table. Past a certain size, additional extent map extents may be required. There is also a Free Space Control Record every 500 pages that DB2 uses when looking for space to insert a new row or move a relocated row.

Index Page Structure

Indexes are logically organized in a b-tree structure. Interestingly, the RIDs that index pages use are table-space relative in DMS tablespace containers, but object relative in SMS tablespace containers – perhaps this is one of the reasons there has never been an easy way to convert from SMS to DMS or vice-versa.

I have not been able to find a good representation of exactly what leaf and non-leaf pages look like for indexes. We do have the standard representation of pages in a b-tree index, that is:
[Image: standard b-tree index structure]

This shows us that we can expect to see index keys that delimit ranges on the root and intermediate non-leaf pages, and that on the leaf pages, we expect to see the index keys along with the RIDs that correspond to those index keys. There is still a page header that is between 91 and 100 bytes, but I don’t know if index leaf pages have the same slot directory that data pages do. Again, I welcome user comments and links on this topic.

Extent Size and Prefetch Size

The extent size and prefetch size are specified on a tablespace-by-tablespace basis (the EXTENTSIZE and PREFETCHSIZE keywords on the CREATE TABLESPACE statement) or as database-wide defaults (the DFT_EXTENT_SZ and DFT_PREFETCH_SZ db cfg parameters). Extent sizes cannot be changed after tablespace creation. Prefetch sizes can be set to AUTOMATIC and be changed automatically by DB2, or can be altered manually using the ALTER TABLESPACE statement. Both are specified as a number of pages.

The extent size is the number of pages allocated to an object in each tablespace container at one time. The prefetch size is the number of pages read into the bufferpool by the prefetchers for one read request. I’m not going to speak specifically to the tuning of these in this post.
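As a sketch (the tablespace name is a placeholder), both are specified at creation, and the prefetch size can be changed later:

db2 "CREATE TABLESPACE TS_EXAMPLE EXTENTSIZE 32 PREFETCHSIZE AUTOMATIC"
db2 "ALTER TABLESPACE TS_EXAMPLE PREFETCHSIZE 64"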

Summary

By understanding pages, we come closer to understanding how DB2 handles some aspects of I/O. Minimally, a DBA needs to be able to pick the appropriate page size for a given table and its indexes.


Two Worlds of DB2 LUW Performance Monitoring


I generally suggest both my readers and my clients turn on the monitor switches using the DFT_MON* parameters in the DBM CFG. However, I find myself using traditional snapshots less and less. The main time I still use them is when I’m panicked and my older training kicks in. But thinking back today, the only time in the last month that I used a “GET SNAPSHOT” was when working on an 8.2 database (which is still supported by IBM when it is in conjunction with WebSphere Commerce).

Two Worlds

You can think of the two monitoring methods as the “old” way and the “new” way of accessing performance monitoring data. Starting in earnest in DB2 9.7, IBM started to introduce the MON_* table functions and views as a lighter weight methodology for monitoring DB2. The IBM DB2 Info Center refers to them as simply “Monitoring Routines and Views”. IBM describes them as having less impact on the database being monitored, and also as IBM’s strategic direction for database monitoring.

The older methodology is referred to as “Snapshot Monitoring” – you used to have no choice but the GET SNAPSHOT command. SQL methods have been introduced over the years, so there are other ways to get to the data. I fully expect that in some future version IBM will deprecate snapshot monitoring and eventually discontinue it.

Today we are in transition between the two methods – more functionality is added to the monitoring routines and views with each Fixpack of 9.7, and it takes those of us with significant experience with the older method some time to move over to fully using the new ones. We try to make old approaches work with the new methodology. If there were just a “RESET MONITOR SWITCHES” equivalent, some would take it up quicker. I imagine there are technical reasons that there isn’t.

Also, I’m slow to the game. Maybe SAP certifies a new version of DB2 within 9 weeks, but IBM WebSphere Commerce? I’m lucky if they do within TWO YEARS, and even luckier if they don’t require my clients to buy expensive separate DB2 licenses to do it. I hear they’re working on that, but have yet to see DB2 10.1 be allowed with any version of WebSphere Commerce, and 10.1 has been out for well over a year now. If you are experienced and already know all this new monitoring methodology inside and out, just comment and help a fellow DBA out. Perhaps something more cutting edge should be on my wishlist the next time I change jobs.

Monitor Switches and Snapshot Monitoring

The monitor switches control what data db2 collects for the older snapshot monitoring interface. Yes, they can be turned on at the command line for a particular session, but if you set the default settings for them in the DBM CFG, then you’ll have DB2 collecting the data you’ll need if you run into a performance problem. To check them:

$ db2 get dbm cfg |grep DFT_MON
Buffer pool                         (DFT_MON_BUFPOOL) = ON
Lock                                   (DFT_MON_LOCK) = ON
Sort                                   (DFT_MON_SORT) = ON
Statement                              (DFT_MON_STMT) = ON
Table                                 (DFT_MON_TABLE) = ON
Timestamp                         (DFT_MON_TIMESTAMP) = ON
Unit of work                            (DFT_MON_UOW) = ON

To set them, use:

db2 update dbm cfg using DFT_MON_BUFPOOL ON DFT_MON_LOCK ON DFT_MON_SORT ON DFT_MON_STMT ON DFT_MON_TABLE ON DFT_MON_UOW ON HEALTH_MON OFF IMMEDIATE

I included the setting for turning the Health Monitor off. I don’t use it. If you don’t use it, turn it off – it can cause performance problems.

Changes to these parameters take place online, as long as you are attached to the instance and use the “immediate” keyword.

What does turning these on get you? It gets you data for all fields in the “GET SNAPSHOT” output, for all SYSIBMADM views, and for most monitoring table functions that do not start with “MON_”. That includes things like SYSIBMADM.SNAPDB. These can be useful as a transition.
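For example, a quick look at a few database-level counters through one of those administrative views might look like this – verify the column names against the Info Center for your version:

db2 "SELECT DB_NAME, ROWS_READ, TOTAL_SORTS, LOCK_WAITS FROM SYSIBMADM.SNAPDB"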

With this older methodology, you can issue the command “RESET MONITOR SWITCHES” and reset the counters for a particular session. The most useful aspect of this is to have a script that connects to a database, resets the monitor switches, sleeps for an hour (or some other amount of time), and then takes snapshots to files. This lets us get data that we know is only for a very specific period of time – though the dynamic SQL snapshot was always exempt from that methodology. I still capture data that way on most of my databases – as an emergency backup since some of my newer scripts still have bugs that I’m working out.
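A stripped-down sketch of that kind of interval script (the database name and output path are placeholders):

db2 connect to SAMPLE
db2 reset monitor all          # zero the counters for this session
sleep 3600                     # wait out the measurement interval
db2 get snapshot for all on SAMPLE > /tmp/snapshot.SAMPLE.$(date +%Y%m%d%H%M).out
db2 terminate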

Looking at Data Elements to Determine Which Switch Must be on for Data to be Collected

Sometimes you have a particular area or a particular metric that you are especially interested in. Using the IBM DB2 Info Center, it is easy to see which switch must be on to collect data for a particular metric. Simply search to find the page on that metric, and you’ll get something that looks like this:
[Image: IBM DB2 Info Center page for a monitor element]
Click on the image to go to that page in the IBM DB2 Info Center if you like. Notice in Table 2, the right hand column lists the name of the monitor switch that must be turned on for this metric to be collected.

Remember that the monitor switch must be turned on before an issue happens for DB2 to collect data about that issue. If you plan on making extensive use – or even emergency backup use – of the snapshot monitors, it is a good idea to have all of the monitor switches on by default.

Newer MON_* Monitoring Routines and Views

I’ve written a number of posts on these:
My New Best Friend – mon_ Part 1: Table Functions
My New Best Friend – mon_ Part 2: Views
My love affair with them continues.

One of their chief disadvantages has been that they always record data since the last database restart, and there is no way to reset monitor switches or limit data to a specific time period. I’ve said it before, and I’ll say it again, my favorite developerWorks article in recent years is: Monitoring in DB2 9.7, Part 1: Emulating data reset with the new DB2 9.7 monitoring table functions. This excellent article includes the scripts you need to implement an emulation of “RESET MONITOR SWITCHES” with the lightweight monitoring routines and views in DB2 9.7 and above. I have extended the methodology for my own use – to include the package cache and some other tidbits, and also to keep history tables of the data. The old scripts that took snapshots hourly now write to tables instead, and it is so easy to look through that data with SQL for performance trends or to pinpoint issues.
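A bare-bones version of the same idea – capture rows from MON_GET_WORKLOAD into a history table with a timestamp, and diff between captures later (the schema and table name are made up):

db2 "CREATE TABLE DBA.WORKLOAD_HIST AS (SELECT CURRENT TIMESTAMP AS CAPTURE_TIME, T.* FROM TABLE(MON_GET_WORKLOAD(NULL,-2)) AS T) WITH NO DATA"
db2 "INSERT INTO DBA.WORKLOAD_HIST SELECT CURRENT TIMESTAMP, T.* FROM TABLE(MON_GET_WORKLOAD(NULL,-2)) AS T"

Run the INSERT on a schedule, and the deltas between any two CAPTURE_TIME values give you the activity for just that window.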

There are actually some configuration parameters that control what they collect, too. Unlike the old snapshot monitoring interface, these parameters are in the DB cfg. They look like this:

$ db2 get db cfg for wc005p01 |grep MON_
 Request metrics                       (MON_REQ_METRICS) = BASE
 Activity metrics                      (MON_ACT_METRICS) = BASE
 Object metrics                        (MON_OBJ_METRICS) = BASE
 Unit of work events                      (MON_UOW_DATA) = NONE
 Lock timeout events                   (MON_LOCKTIMEOUT) = WITHOUT_HIST
 Deadlock events                          (MON_DEADLOCK) = WITHOUT_HIST
 Lock wait events                         (MON_LOCKWAIT) = NONE
 Lock wait event threshold               (MON_LW_THRESH) = 5000000
 Number of package list entries         (MON_PKGLIST_SZ) = 32
 Lock event notification level         (MON_LCK_MSG_LVL) = 2

Like the DFT monitor switches for the old snapshot monitoring, changes to these take place online. However, they take effect for new connections only – existing connections are not affected. This could be problematic for an application that retains the same connections for long periods of time. Also, there is no way to turn them on for only a particular session like monitor switches – they’re either on or they’re not.
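Changing them is just a db cfg update (the database name is a placeholder) – but keep in mind that long-lived connections keep their old settings until they reconnect:

db2 "UPDATE DB CFG FOR SAMPLE USING MON_REQ_METRICS BASE MON_ACT_METRICS BASE MON_OBJ_METRICS BASE"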

The same information is available in the IBM DB2 Info Center for each monitor element. To look at the same one as in the previous section:
[Image: IBM DB2 Info Center page for a monitor element]
Click on the image to go to that page in the IBM DB2 Info Center if you like. Notice Table 1 – the right hand column tells us what parameter and setting we need to collect data for this element. In the case of the example here, to get the data for this monitoring element, MON_OBJ_METRICS must be set to BASE or higher.

Most of these parameters allow a setting of “BASE”, “NONE”, or “EXTENDED” and default to “BASE” – which I am much happier about than the default settings for the snapshot monitoring interface that I have always disagreed with. Unlike the old snapshot monitoring, some of these settings can affect what event monitors collect too. See my post on Analyzing Deadlocks – the new way to see an example of how that works.

The info center on each of these parameters tells us what information they pertain to. The ones to focus on that roughly equate to the same kind of data as the old snapshot monitoring interface are:

  • MON_REQ_METRICS
  • MON_ACT_METRICS
  • MON_OBJ_METRICS

MON_REQ_METRICS

Monitoring Request Metrics
The default for databases migrated from previous DB2 versions is NONE. For newly created databases, the default is BASE.
The possible values are NONE, BASE, and EXTENDED
Setting the parameter to BASE or EXTENDED will cause data for the following to be collected:

  • MON_GET_UNIT_OF_WORK
  • MON_GET_CONNECTION
  • MON_GET_SERVICE_SUBCLASS
  • MON_GET_WORKLOAD
  • Statistics event monitor (you can only access this data if the event monitor exists)
  • Unit of work event monitor (you can only access this data if the event monitor exists)

MON_GET_WORKLOAD is actually the one I use in place of a database snapshot, so this is an important one.
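As a rough sketch of how I use it in place of a database snapshot (verify the column names against your fix pack level):

db2 "SELECT SUBSTR(WORKLOAD_NAME,1,25) AS WORKLOAD, SUM(TOTAL_APP_COMMITS) AS COMMITS, SUM(ROWS_READ) AS ROWS_READ, SUM(ROWS_RETURNED) AS ROWS_RETURNED FROM TABLE(MON_GET_WORKLOAD(NULL,-2)) AS T GROUP BY WORKLOAD_NAME"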

MON_ACT_METRICS

Monitoring Activity Metrics
The default for databases migrated from previous DB2 versions is NONE. For newly created databases, the default is BASE.
The possible values are NONE, BASE, and EXTENDED
Setting the parameter to BASE or EXTENDED will cause data for the following to be collected:

  • MON_GET_ACTIVITY_DETAILS
  • MON_GET_PKG_CACHE_STMT
  • Activity event monitor (you can only access this data if the event monitor exists)

MON_GET_PKG_CACHE_STMT is probably my favorite table function, so this one is critical for me.
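As an illustration of why – a query along these lines pulls the top statements by CPU time straight out of the package cache (column names should be checked against your fix pack level):

db2 "SELECT NUM_EXECUTIONS, TOTAL_CPU_TIME, SUBSTR(STMT_TEXT,1,80) AS STATEMENT FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL,NULL,NULL,-2)) AS T ORDER BY TOTAL_CPU_TIME DESC FETCH FIRST 10 ROWS ONLY"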

MON_OBJ_METRICS

Monitoring Object Metrics
The default for databases migrated from previous DB2 versions is NONE. For newly created databases, the default is BASE.
The possible values are NONE, BASE, and EXTENDED
Setting the parameter to BASE or EXTENDED will cause data for the following to be collected:

  • MON_GET_BUFFERPOOL
  • MON_GET_TABLESPACE
  • MON_GET_CONTAINER

This one would be critical if you were tuning I/O or memory – pretty critical areas, too.
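For example, a quick data-page hit ratio check might look something like this (a sketch – verify the column names for your version):

db2 "SELECT SUBSTR(BP_NAME,1,20) AS BP_NAME, POOL_DATA_L_READS, POOL_DATA_P_READS, CASE WHEN POOL_DATA_L_READS > 0 THEN DEC(100 - (POOL_DATA_P_READS * 100.0 / POOL_DATA_L_READS),5,2) END AS DATA_HIT_PCT FROM TABLE(MON_GET_BUFFERPOOL(NULL,-2)) AS T"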

The recommendation for all three of these parameters would be to set them at BASE.

Summary

In DB2 9.7 there are two worlds of performance monitoring, and a transition is occurring from the old snapshot monitoring to the new monitoring routines and views. Many of the same system monitor elements are available in each method. In the IBM DB2 Info Center page that we looked at above, in the left hand column of tables 1 and 2 you will see what table functions or snapshots you can use to see a particular element. Every row in a snapshot and every column in the table functions and views are represented with a similar page in the IBM DB2 Info Center.

New functionality is being added to the new monitoring routines and views – there are things there that you can’t get otherwise, like static SQL from the package cache – something that, with the old method, required an event monitor, which has far more impact on the database and a lot more data to parse through. I’m not sure if IBM is intentionally removing some monitor elements that were in the old snapshots, or if there are some they just haven’t gotten around to, or if they have a different approach in mind, but there are also elements that are simply missing from the new monitoring routines and views. Ones that I’ve looked for lately and found missing include “connections_top” and “x_lock_escals”.

If you’ve been a DBA for a while or support particularly old versions of DB2, it’s best to know both of them. If you are a new DBA, focus on the newer methodology.

HADR_TIMEOUT vs. HADR_PEER_WINDOW


It has taken me a while to fully understand the difference between HADR_TIMEOUT and HADR_PEER_WINDOW. I think there is some confusion here, so I’d like to address what each means and some considerations when setting them. In general, you’ll only need HADR_TIMEOUT when using HADR, and only need HADR_PEER_WINDOW when using TSA (db2haicu) or some other automated failover tool.

HADR_TIMEOUT

HADR_TIMEOUT defines, in seconds, how long after the unavailability of the other HADR server is first noticed that the HADR state will change from connected to disconnected. If you are starting HADR on the primary server and the primary cannot connect to the standby within this number of seconds, the start will fail and HADR will not be running. Assuming no failover software and HADR_PEER_WINDOW set to 0, the primary server will continue processing transactions without sending them to the standby. It will periodically retry the connection to the standby, and if the standby becomes available, it will again start sending transactions, with commits tied to the requirements of the SYNCMODE being used.

If attempting a takeover without force, DB2 will wait this amount of time to attempt to communicate with the other server before failing and returning an error message.

The real point of this time period is to allow minor network hiccups to occur without other action being taken, but yet to consider the connection failed so as not to impede transactions after a reasonable period of time.

Setting this value depends on your network. I have a client with frequent network issues where I keep this value at 300. I have other clients where I use simply 120, which seems to work well for most environments. I have seen it set as low as 10 seconds for a very highly available network where seconds of slowdown are not very acceptable, but would be very cautious setting it that low.
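The setting itself is a simple db cfg update on both servers (the database name is a placeholder); note that HADR configuration parameters generally require HADR – and usually the database – to be stopped and restarted before they take effect:

db2 "UPDATE DB CFG FOR SAMPLE USING HADR_TIMEOUT 120"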

HADR_PEER_WINDOW

This parameter is not usually used when only HADR is in place with manual failover. But it is critical if using automated failover for HADR such as TSA (db2haicu) or others. This tells DB2 how long AFTER the connection is considered failed to continue to behave as if the connection were not failed. Now that may sound a bit odd. But the real intention here is to allow the connection to be considered failed, and then give time for that failure to be detected by the failover automation software before any transactions are allowed to complete and compromise the data. This means you can easily have connections waiting for as much as HADR_TIMEOUT plus HADR_PEER_WINDOW before a failover is completed and your database is again available.

Most frequently I see HADR_PEER_WINDOW set to 300 out of an abundance of caution – actual takeovers do not generally take that long, though in a failure state there may be multiple factors slowing down the failover.
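The update follows the same pattern (database name is a placeholder), and db2pd will show you the peer window currently in effect:

db2 "UPDATE DB CFG FOR SAMPLE USING HADR_PEER_WINDOW 300"
db2pd -db SAMPLE -hadr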

SQL5005C and Ulimit Issues


I’m spoiled. While we build a fair number of environments each year, we also have basic starting standards. Because of this, I sometimes miss the basics when a problem shows up. Or at least it takes me longer to get there.

In this case, we had a couple of alerts over the high-volume weekend (Black Friday 2013). They were alerts from our connection monitor. We had done some tuning before the holiday, which included increasing MAXFILOP. This database is largely SMS tablespaces and an older version of DB2 (and WCS 6). The alerts were transient – as soon as someone logged in to look at them, connections were working just fine. Looking in the db2diag.log on Monday morning, I saw a number of entries like this:

2013-12-02-09.49.35.996713-300 I634382E367        LEVEL: Severe
PID     : 10811                TID  : 47251525193888PROC : db2agent (ESB19Q02) 0
INSTANCE: db2inst1             NODE : 000         DB   : ESB19Q02
APPHDL  : 0-1206               APPID: *LOCAL.db2inst1.133012144937
FUNCTION: DB2 UDB, base sys utilities, sqleserl, probe:10
RETCODE : ZRC=0xFFFFEC73=-5005

2013-12-02-09.49.35.996180-300 I632343E481        LEVEL: Error
PID     : 10811                TID  : 47251525193888PROC : db2agent (ESB19Q02) 0
INSTANCE: db2inst1             NODE : 000         DB   : ESB19Q02
APPHDL  : 0-1206               APPID: *LOCAL.db2inst1.133012144937
FUNCTION: DB2 UDB, config/install, sqlf_read_db_and_verify, probe:30
MESSAGE : SQL5005: sqlf_openfile rc = 
DATA #1 : Hexdump, 4 bytes
0x00007FFFFEC5864C : 0600 0F85   

One time, I actually managed to catch the error at the command line – it looked like this:

$ db2 connect to esb19q02
SQL5005C  System Error.

In researching this, I found this helpful technote: http://www-01.ibm.com/support/docview.wss?uid=swg21403936

And while I first thought that I needed to increase MAXFILOP, I figured out that it was really the ulimit that was my problem:

$ ulimit -a
...
open files                      (-n) 1024
...

This particular instance had three databases on it, all with SMS tablespaces, and one with over a thousand tables. The settings for MAXFILOP for the three databases added up to 4096.
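A quick way to compare the combined MAXFILOP settings to the ulimit is something like this (the database names are placeholders):

for DB in ESBDB01 ESBDB02 ESBDB03; do db2 get db cfg for $DB | grep MAXFILOP; done
ulimit -n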

In order to increase the limit, I added the following lines to /etc/security/limits.conf, as root:

db2inst1    soft    nofile    16192
db2inst1    hard    nofile    16192

… where db2inst1 is my instance owner.

Modifying the ulimit as the instance owner itself did not work:

$ ulimit -n 16192
-bash: ulimit: open files: cannot modify limit: Operation not permitted

Unfortunately, these settings do not take effect until the next time the database manager is started (db2stop/db2start), so I had to schedule that outage. I could have also done it with a failover to avoid the actual outage.

To prevent the issue, MAXFILOP could actually be lowered across the databases, with the side effect of possibly decreasing database performance, but preventing an actual inability to connect.

With the modifications to make automatic storage tablespaces so easy to use, and the default, I see fewer and fewer databases making extensive use of SMS tablespaces.

Do you see a ‘Congested’ State for HADR while Performing Reorgs?


I monitor HADR closely, and want to be paged out if HADR goes down. Why? Because any subsequent failure would then be a serious problem. Once HADR has a problem, I suddenly have a single point of failure. That said, for some clients we have to tell our monitoring tools not to alert on HADR during online reorgs. The reason for that is that HADR tends to get into a ‘Congested’ state during online reorgs for certain databases.

Describing the Problem

Essentially what we’re seeing is network congestion. We can see it for one of two major reasons:

  1. A lot of log files are being generated, and the standby is having trouble keeping up
  2. An operation that takes a long time such as an offline reorg, but requires relatively few log entries

In the case of online reorgs, I have certainly seen a lot of logging occurring.
This is what the problem looks like when you get HADR status using the db2pd command:

HADR Information:
Role    State                SyncMode HeartBeatsMissed   LogGapRunAvg (bytes)
Primary Peer                 Nearsync 0                  991669

ConnectStatus ConnectTime                           Timeout
Congested     Wed Nov  25 20:31:26 2010 (1283970686) 120

On the primary, DB2 will write the active log file containing the start of the reorg off to disk, so transactions are not impacted there, but on the standby, DB2 cannot do that – it needs to apply whatever is in that log buffer immediately, and if it cannot, it has to wait until the operation in question is finished before moving on.

To prevent the Primary and Standby getting out of sync, DB2 uses this Congested state, and it may actually cause transactions to wait. If, like me, you run a number of reorgs in a window, you may see DB2 going in and out of this state as various tables are reorged. I’m also pretty sure that this doesn’t happen from the start of a reorg to the end of a reorg – there are internal phases or parts of the reorg that may cause this scenario – mostly because I’ve had 5-hour reorgs without hitting a congested state. The level of other activity on the database would probably also play a role.

What are the Effects of the Problem?

When in a ‘Congested’ state, DB2 can prevent transactions from completing on the Primary in SYNC, NEARSYNC, and even ASYNC mode. Only SUPERASYNC mode would be immune to this issue. I tend to run my “online” reorgs at my lowest point of volume anyway to prevent unexpected user impact, so I’ve only heard complaints from automated monitoring, and those even were short lived (less than a couple of minutes). This can all make it pretty frustrating to troubleshoot. Intermittent outages during reorgs of varying duration. Note that just because the status is congested does not mean transactions are being blocked or slowed, but they can be.

Resolving in DB2 10.1 and Later

DB2 10.1 and later have a database configuration parameter – HADR_SPOOL_LIMIT – which you can use to specify how much log you would like to spool to disk on the standby server. It is specified in 4K pages, but there are two special settings which may be useful:

  • -1 means unlimited – fill up all the disk space you have in your logging filesystem on your standby
  • -2 (AUTOMATIC) means fill up configured log space (may only be available on DB2 10.5)

In DB2 10.1, the default is no log spooling, while in DB2 10.5, the default is AUTOMATIC or -2.
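Setting it is a normal db cfg update on the standby (and it makes sense to match it on the primary for when roles swap); the database name is a placeholder, and since the value is in 4K pages, this example allows roughly 4 GB of spool:

db2 "UPDATE DB CFG FOR SAMPLE USING HADR_SPOOL_LIMIT 1000000"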

Inevitably a reader at this point is wondering “doesn’t that compromise my ability to failover?”. No, allowing logs to spool to disk does not affect your ability to failover – this just makes the standby behave more like the primary on logging. What it does do is potentially increase the time that a failover would take. You would still get all the data, but you would then have to roll forward through any log files on disk before the database would become available on the standby.

Resolving before DB2 10.1

Before 10.1, there isn’t a full resolution – we can either accept the slowness caused by the congestion, OR we can increase the size of the log buffer on the standby. This is handled by the DB2 registry variable DB2_HADR_BUF_SZ, which defaults to two times the size of the primary’s log buffer. The reason we don’t want to control this by actually changing LOGBUFSZ is that in a failover, we’d want a normal log buffer on the (new) primary. The maximum for DB2_HADR_BUF_SZ is supposedly 4 GB, but from reading and searching around, I wouldn’t set it over 1 GB.

I learned about this work-around only a couple of weeks ago, and am already using it on at least one client. Another nice thing – this parameter is available all the way back to DB2 8.2, so for those of us with some back-level databases it still works.
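Setting the registry variable is a db2set call followed by an instance recycle on the standby. My understanding is that the value is specified in 4 KB pages (verify that for your version), so something like this would give roughly a 512 MB standby log buffer:

db2set DB2_HADR_BUF_SZ=131072
db2stop
db2start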

Here’s the drawback of the approach though – consider a failure scenario in NEARSYNC. If both database servers were to fail at the same moment, AND I couldn’t get the primary database server back, I would lose anything in that larger memory area. Granted, that’s a rare failure scenario, but certainly within the realm of the possible. Unlike the spooling resolution in DB2 10.1 and later, you are changing your recoverability, and you have to decide if the benefits are worth the risks.

A big thanks to Melanie Stopfer @mstopfer1 and Dale McInnis for covering this in their presentations at IBM IOD – this was my biggest immediate-impact technical take-away from that conference.

Informational Constraints – Benefits and Drawbacks


One of the most frustrating things a DBA can experience is troubleshooting due to bad data. The client is upset because rows are missing or incorrect data is returned. The client-facing web front end could be displaying gobbledygook because the data retrieved makes no sense. Resources and energy are burned on an issue that is easily solved with the proper use of constraints.

So why would I ever want a check constraint to be created on a table but NOT ENFORCED?

Constraints are a double-edged sword. They protect the database from poor data quality at the cost of the overhead associated with each constraint. If you load a large amount of data that must be verified against constraints, you can add significant overhead.

If the application is configured to verify the data being fed into the database, do we really need the additional overhead of database constraints? Data verification in both the application and the database is a duplication of effort and leads to wasted resources and longer processing times.

A constraint that is NOT ENFORCED informs the database manager what the data would normally look like and how it would behave under constraint, but data that violates the constraint is not prevented from accessing the target table. This is called an informational constraint.

What is the benefit of an informational constraint? The optimizer can base its query plan on what it believes the data should look like because of the constraint’s definition. This leads to improved performance.

The catch? What happens with bad data? Let’s take a look at an example.

In this specific scenario, assume we have a simple table that is the back end of a project management tool. On the web front end, the application takes in a project name and its current status. Responses can be “Green” for a project in good health, “Yellow” for a project facing challenges, and “Red” for a project at a standstill.

The table definition could look like this (Notice the NOT ENFORCED clause):

 

[Image: CREATE TABLE command with an informational constraint]
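Since the original screenshot is not available here, a reconstructed sketch of what such a definition might look like (the table and column names are illustrative, not the exact ones from the image):

db2 "CREATE TABLE PROJECT_STATUS (
       PROJECT_NAME VARCHAR(64) NOT NULL,
       STATUS       VARCHAR(10) NOT NULL,
       CONSTRAINT CHK_STATUS CHECK (STATUS IN ('Green','Yellow','Red')) NOT ENFORCED ENABLE QUERY OPTIMIZATION
     )"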

 

Once the data is inserted, we have the following project data:

[Image: Project management data]

 

This is great! If the application is vetting data before it comes in, the constraint is NOT ENFORCED, so there is no overhead, and the ENABLE QUERY OPTIMIZATION clause tells DB2 that it can lean on this constraint to help generate a great access plan.

But what happens if the data is not vetted by the application? For example:

[Image: What happens when bad data is inserted]

The table would be loaded with incorrect data that is invalid for queries against the database. Our table is now in the following state:

 

[Image: Bad data bypasses a NOT ENFORCED constraint and is inserted]

 

Here is the weakness of an informational constraint. The optimizer assumes rows that violate the constraint don’t exist in the table. As a result, a query may not include the invalid rows when they should be returned. Although they exist, DB2 is convinced otherwise because of the informational constraint.

So a query against the table could return the following output:

[Image: Returned rows are missing data that is actually there]

 

Notice that the optimizer believes the rows can’t be there, so the invalid rows are not displayed. IBM’s Knowledge Center (as well as other training material) states that the rows may not be displayed, which leads me to believe that two separate queries could return two different results. Bad data now leads to inconsistent query results.

This weakness of an informational constraint could be too large a point of failure for some administrators. However, the benefits of an informational constraint seem to outweigh the risks if your application is configured properly. Imagine the overhead saved and the speed increase on a massive data warehouse with an overnight load cycle of hundreds of gigs.

 


Michael Krafick is an occasional contributor to db2commerce.com. He has been a production support DBA for over 12 years in data warehousing and highly transactional OLTP environments. He was acknowledged as a top ten session speaker for “10 Minute Triage” at the 2012 IDUG Technical Conference. Michael also has extensive experience in setting up monitoring configurations for DB2 Databases as well as preparing for high availability failover, backup, and recovery. He can be reached at “Michael.Krafick (at) icloud (dot) com”. Linked-in Profile: http://www.linkedin.com/in/michaelkrafick. Twitter: mkrafick

Mike’s blog posts include:
10 Minute Triage: Assessing Problems Quickly (Part I)
10 Minute Triage: Assessing Problems Quickly (Part II)
Now, now you two play nice … DB2 and HACMP failover
Technical Conference – It’s a skill builder, not a trip to Vegas.
Why won’t you just die?! (Cleaning DB2 Process in Memory)
Attack of the Blob: Blobs in a Transaction Processing Environment
Automatic Storage Tablespaces (AST): Compare and Contrast to DMS
DB2 v10.1 Column Masking
Automatic Storage (AST) and DMS
Relocating the Instance Home Directory

STMM Analysis Tool


I mostly like and use DB2’s Self-Tuning Memory Manager (STMM) for my OLTP databases where I have only one DB2 instance/database on a database server. I do have some areas that I do not let it set for me. I’ve recently learned about an analysis tool – Adam Storm did a presentation that mentioned it at IDUG 2014 in Phoenix.

Parameters that STMM Tunes

To begin with, it is important to understand what STMM tunes and what it doesn’t. I recommend reading When is ‘AUTOMATIC’ Not STMM?. There are essentially only 5 areas that STMM can change:

  1. DATABASE_MEMORY if AUTOMATIC
  2. SORTHEAP, SHEAPTHRES_SHR if AUTOMATIC, and SHEAPTHRES is 0
  3. BUFFERPOOLS if number of pages on CREATE/ALTER BUFFERPOOL is AUTOMATIC
  4. PCKCACHESZ if AUTOMATIC
  5. LOCKLIST, MAXLOCKS if AUTOMATIC (both must be automatic)

Any other parameters, even if they are set to “AUTOMATIC”, are not part of STMM.
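Enabling STMM for those areas is a matter of turning on SELF_TUNING_MEM and setting the individual consumers to AUTOMATIC – a sketch with a placeholder database name (SHEAPTHRES must also be 0 at the instance level for sort memory to be tuned):

db2 "UPDATE DB CFG FOR SAMPLE USING SELF_TUNING_MEM ON DATABASE_MEMORY AUTOMATIC SORTHEAP AUTOMATIC SHEAPTHRES_SHR AUTOMATIC LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC PCKCACHESZ AUTOMATIC"
db2 "ALTER BUFFERPOOL IBMDEFAULTBP SIZE AUTOMATIC"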

Why I don’t use STMM for PCKCACHESZ

A number of the e-commerce database servers I support are very much oversized for daily traffic. This is common for retail sites because there are always peak periods, and servers tend to be sized to handle those. Many retail clients have extremely drastic peak periods like Black Friday, Cyber Monday, or other very critical selling times.

I noticed for one of my clients that was significantly oversized on memory that DB2 was making the package cache absolutely huge. I saw this:

Package cache size (4KB)                   (PCKCACHESZ) = AUTOMATIC(268480)

That’s a full GB allocated to the package cache. There were over 30,000 statements in package cache, the vast majority with only a single execution. The thing is that for my OLTP databases the statements for which performance is critical are often static SQL or they’re using parameter markers. Most of the ad-hoc statements that are only executed once I don’t really care if they’re stored in package cache. This was about a 50-100 GB database on a server with 64 GB of memory. The buffer pool hit ratios were awesome, so I guess DB2 didn’t really need the memory there, but still. In my mind, for well-run OLTP databases, that much package cache does not help performance. I am certain there may be databases that need that much or more in the Package Cache, but this database was simply not one of them. Because of this experience I set the package cache manually and tune it properly.
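Taking it back under manual control is just a matter of setting an explicit value – the number below is only an example, and should be sized from your own package cache monitoring:

db2 "UPDATE DB CFG FOR SAMPLE USING PCKCACHESZ 65536"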

A few STMM Caveats

Just a few things to note – I have heard rumors of issues with STMM when there are multiple DB2 instances running on a server. I have not personally experienced this. Also, the settings that STMM is using are not transferred at all to the HADR standby, so when you fail over, you may have poor performance while STMM starts up. You could probably script a regular setting of the STMM parameters to deal with this. Also, if you have a well-tuned, well-performing non-STMM database, there is probably little reason and not much reward in changing it to STMM. Most experts in database performance can likely tune the database better than STMM, but we can’t all be performance experts, or give as much time as we’d like to every database we support.

The STMM Log Parser

STMM logs the changes it makes in parameter sizes both to the db2diag.log and to some STMM log files. (Hint: IBM, maybe these could be used to periodically update the HADR standby too?) The log files are in the stmmlog subdirectory of the DIAGPATH. The log files aren’t exactly tough to read, but they don’t really present the information in an easy-to-view way. Entries look a bit like diagnostic log entries:

2014-07-02-23.44.40.788684+000 I10464203A600        LEVEL: Event
PID     : 18677976             TID : 46382          PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000           DB   : WC42P1L1
APPHDL  : 0-12466              APPID: *LOCAL.DB2.140620223552
AUTHID  : DB2INST1             HOSTNAME: ecprwdb01s
EDUID   : 46382                EDUNAME: db2stmm (WC42P1L1) 0
FUNCTION: DB2 UDB, Self tuning memory manager, stmmMemoryTunerMain, probe:2065
DATA #1 : String, 115 bytes
Going to sleep for 180000 milliseconds.
Interval = 5787, State = 0, intervalsBeforeStateChange = 0, lost4KPages = 0

2014-07-02-23.47.40.807231+000 I10464804A489        LEVEL: Event
PID     : 18677976             TID : 46382          PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000           DB   : WC42P1L1
APPHDL  : 0-12466              APPID: *LOCAL.DB2.140620223552
AUTHID  : DB2INST1             HOSTNAME: ecprwdb01s
EDUID   : 46382                EDUNAME: db2stmm (WC42P1L1) 0
FUNCTION: DB2 UDB, Self tuning memory manager, stmmMemoryTunerMain, probe:1909
MESSAGE : Activation stage ended

2014-07-02-23.47.40.807661+000 I10465294A488        LEVEL: Event
PID     : 18677976             TID : 46382          PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000           DB   : WC42P1L1
APPHDL  : 0-12466              APPID: *LOCAL.DB2.140620223552
AUTHID  : DB2INST1             HOSTNAME: ecprwdb01s
EDUID   : 46382                EDUNAME: db2stmm (WC42P1L1) 0
FUNCTION: DB2 UDB, Self tuning memory manager, stmmMemoryTunerMain, probe:1913
MESSAGE : Starting New Interval

2014-07-02-23.47.40.808193+000 I10465783A925        LEVEL: Event
PID     : 18677976             TID : 46382          PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000           DB   : WC42P1L1
APPHDL  : 0-12466              APPID: *LOCAL.DB2.140620223552
AUTHID  : DB2INST1             HOSTNAME: ecprwdb01s
EDUID   : 46382                EDUNAME: db2stmm (WC42P1L1) 0
FUNCTION: DB2 UDB, Self tuning memory manager, stmmLogRecordBeforeResizes, probe:590
DATA #1 : String, 435 bytes

***  stmmCostBenefitRecord ***
Type: LOCKLIST
PageSize: 4096
Benefit:
  -> Simulation size: 75
  -> Total seconds saved: 0 (+ 0 ns)
  -> Normalized seconds/page: 0
Cost:
  -> Simulation size: 75
  -> Total seconds saved: 0 (+ 0 ns)
  -> Normalized seconds/page: 0
Current Size: 27968
Minimum Size: 27968
Potential Increase Amount: 13984
Potential Increase Amount From OS: 13984
Potential Decrease Amount: 0
Pages Available For OS: 0

2014-07-02-23.47.40.808580+000 I10466709A993        LEVEL: Event
PID     : 18677976             TID : 46382          PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000           DB   : WC42P1L1
APPHDL  : 0-12466              APPID: *LOCAL.DB2.140620223552
AUTHID  : DB2INST1             HOSTNAME: ecprwdb01s
EDUID   : 46382                EDUNAME: db2stmm (WC42P1L1) 0
FUNCTION: DB2 UDB, Self tuning memory manager, stmmLogRecordBeforeResizes, probe:590
DATA #1 : String, 502 bytes

***  stmmCostBenefitRecord ***
Type: BUFFER POOL ( BUFF_REF16K )
PageSize: 16384
Saved Misses: 0
Benefit:
  -> Simulation size: 2560
  -> Total seconds saved: 0 (+ 0 ns)
  -> Normalized seconds/page: 0
Cost:
  -> Simulation size: 2560
  -> Total seconds saved: 0 (+ 0 ns)
  -> Normalized seconds/page: 0
Current Size: 25000
Minimum Size: 5000
Potential Increase Amount: 12500
Potential Increase Amount From OS: 12500
Potential Decrease Amount: 5000
Pages Available For OS: 5000
Interval Time: 180.029

Scrolling through each 10 MB file of this is not likely to give us a complete picture very easily. IBM offers us, through developerWorks, a log parser tool for STMM. The full writeup on it is here: http://www.ibm.com/developerworks/data/library/techarticle/dm-0708naqvi/index.html

The tool is free, and is a Perl script that DBAs can modify if they like. AIX and Linux tend to include Perl, and it’s not hard to install on Windows using ActivePerl or a number of other options. I happen to rather like a Perl utility as I do the vast majority of my database maintenance scripting in Perl.

Download and Set Up

The developerWorks link above includes the Perl script. Scroll down to the “download” section, click on “parseStmmLogFile.pl”, if you accept the terms and conditions, click “I Accept the Terms and Conditions”, and save the file. Then upload it to the database server you wish to use it on.

Syntax

There are several options here. Whenever you execute it, you will need to specify the name of one of your STMM logs, and the database name. The various options beyond that are covered below.
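Based on the examples that follow, the general invocation is:

./parseStmmLogFile.pl <stmm log file> <database name> [options such as s, d, b, o, 4]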

Examples

The default if you specify nothing beyond the file name and the database name is the s option. This gives you the new size at each interval of each heap that STMM manages. The output looks something like this:

 ./parseStmmLogFile.pl /db2diag/stmmlog/stmm.43.log SAMPLE s
# Database: SAMPLE
[ MEMORY TUNER - LOG ENTRIES ]
[ Interv ]      [        Date         ] [ totSec ]      [ secDif ]      [ newSz ]
[        ]      [                     ] [        ]      [        ]      [ LOCKLIST  BUFFERPOOL - BUFF16K:16K BUFFERPOOL - BUFF32K:32K BUFFERPOOL - BUFF4K BUFFERPOOL - BUFF8K:8K BUFFERPOOL - BUFF_CACHEIVL:8K BUFFERPOOL - BUFF_CAT16K:16K BUFFERPOOL - BUFF_CAT4K BUFFERPOOL - BUFF_CAT8K:8K BUFFERPOOL - BUFF_CTX BUFFERPOOL - BUFF_REF16K:16K BUFFERPOOL - BUFF_REF4K BUFFERPOOL - BUFF_REF8K:8K BUFFERPOOL - BUFF_SYSCAT BUFFERPOOL - BUFF_TEMP16K:16K BUFFERPOOL - BUFF_TEMP32K:32K BUFFERPOOL - BUFF_TEMP4K BUFFERPOOL - BUFF_TEMP8K:8K BUFFERPOOL - IBMDEFAULTBP ]
[      1 ]      [ 02/07/2014 00:17:27 ] [    180 ]      [    180 ]      [ 27968 12500 2500 2000000 50000 500000 25000 1000000 50000 1000000 25000 1000000 50000 50000 1000 1000 1000 1000 10000 ]
[      2 ]      [ 02/07/2014 00:20:27 ] [    360 ]      [    180 ]      [ 27968 12500 2500 2000000 50000 500000 25000 1000000 50000 1000000 25000 1000000 50000 50000 1000 1000 1000 1000 10000 ]
[      3 ]      [ 02/07/2014 00:23:27 ] [    540 ]      [    180 ]      [ 27968 12500 2500 2000000 50000 500000 25000 1000000 50000 1000000 25000 1000000 50000 50000 1000 1000 1000 1000 10000 ]
[      4 ]      [ 02/07/2014 00:26:27 ] [    720 ]      [    180 ]      [ 27968 12500 2500 2000000 50000 500000 25000 1000000 50000 1000000 25000 1000000 50000 50000 1000 1000 1000 1000 10000 ]
[      5 ]      [ 02/07/2014 00:29:27 ] [    900 ]      [    180 ]      [ 27968 12500 2500 2000000 50000 500000 25000 1000000 50000 1000000 25000 1000000 50000 50000 1000 1000 1000 1000 10000 ]

If you have a number of bufferpools, this can be hard to read, even on a large screen. The width of the numeric values is not the same as that of their names, so the output is not all that tabular. To fix that, you can try the d option, which delimits the output with semicolons, making it easier to get into your favorite spreadsheet tool. The raw output in that case looks like this:

./parseStmmLogFile.pl /db2diag/stmmlog/stmm.43.log SAMPLE s d
# Database: SAMPLE
MEMORY TUNER - LOG ENTRIES
Interval;Date;Total Seconds;Difference in Seconds; LOCKLIST  ;  BUFFERPOOL - BUFF16K:16K ;  BUFFERPOOL - BUFF32K:32K ;  BUFFERPOOL - BUFF4K ;  BUFFERPOOL - BUFF8K:8K ;  BUFFERPOOL - BUFF_CACHEIVL:8K ;  BUFFERPOOL - BUFF_CAT16K:16K ;  BUFFERPOOL - BUFF_CAT4K ;  BUFFERPOOL - BUFF_CAT8K:8K ;  BUFFERPOOL - BUFF_CTX ;  BUFFERPOOL - BUFF_REF16K:16K ;  BUFFERPOOL - BUFF_REF4K ;  BUFFERPOOL - BUFF_REF8K:8K ;  BUFFERPOOL - BUFF_SYSCAT ;  BUFFERPOOL - BUFF_TEMP16K:16K ;  BUFFERPOOL - BUFF_TEMP32K:32K ;  BUFFERPOOL - BUFF_TEMP4K ;  BUFFERPOOL - BUFF_TEMP8K:8K ;  BUFFERPOOL - IBMDEFAULTBP ; ;
1;02/07/2014 00:17:27;180;180; 27968; 12500; 2500; 2000000; 50000; 500000; 25000; 1000000; 50000; 1000000; 25000; 1000000; 50000; 50000; 1000; 1000; 1000; 1000; 10000;
2;02/07/2014 00:20:27;360;180; 27968; 12500; 2500; 2000000; 50000; 500000; 25000; 1000000; 50000; 1000000; 25000; 1000000; 50000; 50000; 1000; 1000; 1000; 1000; 10000;
3;02/07/2014 00:23:27;540;180; 27968; 12500; 2500; 2000000; 50000; 500000; 25000; 1000000; 50000; 1000000; 25000; 1000000; 50000; 50000; 1000; 1000; 1000; 1000; 10000;
4;02/07/2014 00:26:27;720;180; 27968; 12500; 2500; 2000000; 50000; 500000; 25000; 1000000; 50000; 1000000; 25000; 1000000; 50000; 50000; 1000; 1000; 1000; 1000; 10000;
5;02/07/2014 00:29:27;900;180; 27968; 12500; 2500; 2000000; 50000; 500000; 25000; 1000000; 50000; 1000000; 25000; 1000000; 50000; 50000; 1000; 1000; 1000; 1000; 10000;

Save it off to a file, import it into a spreadsheet, and you get something like this:
[Image: STMM log parser output imported into a spreadsheet]

Ok, and finally, you can make a pretty graph to look at these in a more human way:
[Image: chart of the STMM log parser output]
Now that would be a lot more exciting if I ran it on a database where things were changing more often, but that’s the one I have to play with at the moment.

There are some other interesting options besides the s option. The b option shows the benefit analysis that STMM does, which looks pretty boring on my database, but still:

./parseStmmLogFile.pl /db2diag/stmmlog/stmm.43.log SAMPLE b
# Database: SAMPLE
[ MEMORY TUNER - LOG ENTRIES ]
[ Interv ]      [        Date         ] [ totSec ]      [ secDif ]      [ benefitNorm ]
[        ]      [                     ] [        ]      [        ]      [ LOCKLIST  BUFFERPOOL - BUFF16K:16K BUFFERPOOL - BUFF32K:32K BUFFERPOOL - BUFF4K BUFFERPOOL - BUFF8K:8K BUFFERPOOL - BUFF_CACHEIVL:8K BUFFERPOOL - BUFF_CAT16K:16K BUFFERPOOL - BUFF_CAT4K BUFFERPOOL - BUFF_CAT8K:8K BUFFERPOOL - BUFF_CTX BUFFERPOOL - BUFF_REF16K:16K BUFFERPOOL - BUFF_REF4K BUFFERPOOL - BUFF_REF8K:8K BUFFERPOOL - BUFF_SYSCAT BUFFERPOOL - BUFF_TEMP16K:16K BUFFERPOOL - BUFF_TEMP32K:32K BUFFERPOOL - BUFF_TEMP4K BUFFERPOOL - BUFF_TEMP8K:8K BUFFERPOOL - IBMDEFAULTBP ]
[      1 ]      [ 02/07/2014 00:17:27 ] [    180 ]      [    180 ]      [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
[      2 ]      [ 02/07/2014 00:20:27 ] [    360 ]      [    180 ]      [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
[      3 ]      [ 02/07/2014 00:23:27 ] [    540 ]      [    180 ]      [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
[      4 ]      [ 02/07/2014 00:26:27 ] [    720 ]      [    180 ]      [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
[      5 ]      [ 02/07/2014 00:29:27 ] [    900 ]      [    180 ]      [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]

The o option shows only database memory and overflow buffer tuning:

./parseStmmLogFile.pl /db2diag/stmmlog/stmm.43.log SAMPLE o
# Database: SAMPLE
[ MEMORY TUNER - DATABASE MEMORY AND OVERFLOW BUFFER TUNING - LOG ENTRIES ]
[ Interv ]      [        Date         ] [ totSec ]      [ secDif ]      [ configMem ]   [ memAvail ]    [ setCfgSz ]
[      1 ]      [ 02/07/2014 00:17:27 ] [    180 ]      [    180 ]      [ 6912 ]        [ 6912 ]        [ 1990 ]
[      2 ]      [ 02/07/2014 00:20:27 ] [    360 ]      [    180 ]      [ 6912 ]        [ 6912 ]        [ 1990 ]
[      3 ]      [ 02/07/2014 00:23:27 ] [    540 ]      [    180 ]      [ 6912 ]        [ 6912 ]        [ 1990 ]
[      4 ]      [ 02/07/2014 00:26:27 ] [    720 ]      [    180 ]      [ 6912 ]        [ 6912 ]        [ 1990 ]
[      5 ]      [ 02/07/2014 00:29:27 ] [    900 ]      [    180 ]      [ 6912 ]        [ 6912 ]        [ 1990 ]

There is also a 4 option that you can use to convert all values to 4k pages.

Summary

There are some useful things in the STMM log parser if you want to understand the changes DB2 is making. Many of us, coming from fully manual tuning, naturally distrust what STMM or other tuning tools are doing, so this level of transparency helps us understand what is happening and why it is or is not working in our environments. I would love to see more power in this. Being able to query this data with a table function or administrative view (as we can with the db2diag.log!) would be even more useful, so the output could be further limited and tweaked. The script is well documented, and I imagine I could tweak it to limit it if I wanted to. I’d also love to have it call out actual changes – that would be harder to graph, but for the text output, it could be more useful for a fairly dormant system.

Quick Hit Tips – CPUSPEED, RESTRICTIVE, and DB2_WORKLOAD


Today we are going to talk about some random DB2 features that can’t stand in a blog post of their own, but are worth discussing nonetheless. These are tidbits I had discovered during “DB2’s Got Talent” presentations, IDUG conferences, or “Hey, look what I discovered” moments.

CPUSPEED (Database Management Configuration)

You blow past this setting every time you execute “db2 get dbm cfg”. It’s located at the very top of your output and is one of the more important settings that gets overlooked. The value for this parameter is set after DB2 looks at the CPU and determines how fast instructions are processed (in milliseconds per instruction).

The optimizer is influenced greatly by this setting. CPUSPEED is automatically set upon instance creation and is often never examined again. The setting will stay static unless you ask DB2 to re-examine the CPU.

So, why do we care? Well there could be a few reasons.

  • An additional CPU was added or subtracted from an LPAR.
  • An image of your old server was taken and placed on a new, faster server, with a different type or number of processors.

If for some reason your CPU configuration was altered or changed, the new processing speed is not taken into account until it is re-evaluated. So go ahead and add on that additional CPU to handle your Black Friday workload – it won’t help much unless DB2 knows to take it into account.

To re-evaluate:
db2 "update dbm cfg using CPUSPEED -1"

Once done, you should see a new CPUSPEED displayed for your DBM configuration.

If you are comparing apples to apples, you want to see the number get smaller to show a speed increase. If you are changing architecture (P6 to P7, for example), the number could theoretically go up. Put that in context though – the higher number could be the setting DB2 needs to account for multi-threading or some other hardware change. So it may look worse when it really isn’t.

Once done, DB2 will use the new number to determine proper access paths. So make sure to issue a rebind afterwards so your SQL can take advantage of the change.
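The whole database can be rebound with the db2rbind utility – a sketch with a placeholder database name and log file:

db2rbind SAMPLE -l /tmp/rbind.log all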

(Special thanks to Robert Goethel who introduced this topic during the DB2 Night Show competition this year).

RESTRICTIVE (Create Database …. RESTRICTIVE)

I picked this up in Roger Sanders' DB2 Crammer Course at IDUG this year. I had just spent the past two months auditing our database authorities and was frustrated with the amount of PUBLIC access. I even created a separate SQL script to run after new database creation to revoke some of the default PUBLIC authority.

Apparently I reinvented the wheel. If you use the RESTRICTIVE clause in the CREATE DATABASE command, no privileges or authorities will automatically be granted to PUBLIC.

For example:
db2 "create database warehouse on /data1 dbpath on /home/db2inst1 restrictive"
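If you want to verify what PUBLIC ends up with (on a restrictive or a non-restrictive database), a query along these lines against the catalog does the job – a sketch, assuming you are connected to the database in question:

db2 "select grantee, connectauth, createtabauth, bindaddauth, implschemaauth
from syscat.dbauth
where grantee = 'PUBLIC'
with ur"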

DB2_WORKLOAD (System Environment Variable)

This db2set parameter has been available for a while, but a new option (ANALYTICS) became available with v10.5. Essentially, DB2_WORKLOAD presets a group of registry variables for your needs. Set it once and go – no need to look up various configurations or develop scripts. This is valuable for various application configurations such as BLU or Cognos.

To activate:
db2set DB2_WORKLOAD=<option>

  • 1C – 1C Applications
  • ANALYTICS – Analytics workload
  • CM – IBM Content Manager
  • COGNOS_CS – Cognos Content Server
  • FILENET_CM – FileNet Content Manager
  • INFOR_ERP_LN – ERP Baan
  • MAXIMO – Maximo
  • MDM – Master Data Management
  • SAP – SAP environment
  • TPM – Tivoli Provisioning Manager
  • WAS – WebSphere Application Server
  • WC – WebSphere Commerce
  • WP – WebSphere Portal
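For example, setting up for an analytics/BLU workload would look roughly like this – a sketch, keeping in mind that registry variable changes only take effect after an instance recycle:

db2set DB2_WORKLOAD=ANALYTICS
db2stop
db2start
db2set -all | grep DB2_WORKLOAD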

If you are a WebSphere Commerce nerd like Ember, make sure to read her blog on DB2_WORKLOAD and how it can be used for WebSphere Commerce.


Michael Krafick is an occasional contributor to db2commerce.com. He has been a production support DBA for over 12 years in data warehousing and highly transactional OLTP environments. He was acknowledged as a top ten session speaker for “10 Minute Triage” at the 2012 IDUG Technical Conference. Michael also has extensive experience in setting up monitoring configurations for DB2 Databases as well as preparing for high availability failover, backup, and recovery. He can be reached at “Michael.Krafick (at) icloud (dot) com”. Linked-in Profile: http://www.linkedin.com/in/michaelkrafick. Twitter: mkrafick

Mike’s blog posts include:
10 Minute Triage: Assessing Problems Quickly (Part I)
10 Minute Triage: Assessing Problems Quickly (Part II)
Now, now you two play nice … DB2 and HACMP failover
Technical Conference – It’s a skill builder, not a trip to Vegas.
Why won’t you just die?! (Cleaning DB2 Process in Memory)
Attack of the Blob: Blobs in a Transaction Processing Environment
Automatic Storage Tablespaces (AST): Compare and Contrast to DMS
DB2 v10.1 Column Masking
Automatic Storage (AST) and DMS
Relocating the Instance Home Directory
Informational Constraints: Benefits and Drawbacks


Giving Automatic Maintenance a Fair Try


I’m a control freak. I think that control freaks tend to make good DBAs as long as they don’t take it too far. My position for years has been that I would rather control my runstats, reorgs, and backups directly than trust DB2’s automatic facilities. But I also try to keep an open mind. That means that every so often I have to give the new stuff a chance. This blog entry is about me giving automatic maintenance a try. I am NOT recommending it yet, but here’s how I approached it and what I saw.

Environment

The environment I'm working with here is a brand new 10.5 (fixpack 5) database. It uses column organization for most tables, but has a small subset of tables that must be row-organized. I still refuse to trust my backups to automation – I want them to run at a standard time each day. But for runstats and reorgs, I'm using this article to look into both what to do with the row-organized and column-organized tables and how to set up controls around when things happen. The environment I'm working on happens to use HADR.

Database Configuration Parameter Settings

Here’s the configuration I’m starting with for the Automatic maintenance parameters:

 Automatic maintenance                      (AUTO_MAINT) = ON
   Automatic database backup            (AUTO_DB_BACKUP) = OFF
   Automatic table maintenance          (AUTO_TBL_MAINT) = ON
     Automatic runstats                  (AUTO_RUNSTATS) = ON
       Real-time statistics            (AUTO_STMT_STATS) = ON
       Statistical views              (AUTO_STATS_VIEWS) = OFF
       Automatic sampling                (AUTO_SAMPLING) = OFF
     Automatic reorganization               (AUTO_REORG) = ON

I think those are the defaults for a BLU database, though I may have already set AUTO_DB_BACKUP to OFF manually myself. I’m also going to turn auto_stats_views on, in case I create one of those. This is the syntax I use for that:

-bash-4.1$ db2 update db cfg for SAMPLE using AUTO_STATS_VIEWS ON
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.

And now these configuration parameters look like this:

 Automatic maintenance                      (AUTO_MAINT) = ON
   Automatic database backup            (AUTO_DB_BACKUP) = OFF
   Automatic table maintenance          (AUTO_TBL_MAINT) = ON
     Automatic runstats                  (AUTO_RUNSTATS) = ON
       Real-time statistics            (AUTO_STMT_STATS) = ON
       Statistical views              (AUTO_STATS_VIEWS) = ON
       Automatic sampling                (AUTO_SAMPLING) = OFF
     Automatic reorganization               (AUTO_REORG) = ON

Runstats on Row-Organized Tables

The first thing that I know I want to do is to set up profiles for runstats on my row-organized tables.

Setting Profiles

I want to use this syntax for all of my row organized tables in this database:

runstats on table <SCHEMA>.<TABLE> with distribution and detailed indexes all

If I had larger row-organized tables, I might consider using sampling, but my row-organized tables in this database happen to be on the smaller side.

There doesn’t seem to be a way to set a default syntax other than through creating profiles on individual tables. Would love to know in a comment below if I’m missing something there.

To set my profiles, I’m going to use a little scripting trick I like to use when I have to do the same thing for many objects. In this case, I have 106 tables where I want to set the profile identically, so first I create a list of those tables using:

db2 -x "select  substr(tabschema,1,18) as tabschema
        , substr(tabname,1,40) as tabname
from syscat.tables
where   tableorg='R'
        and tabschema not like ('SYS%')
with ur" > tab.list

This creates a file called tab.list that has only the names of my tables – the -x on the command ensures that the column headings and the summary row telling me how many rows were selected are not returned as part of the query output.

Next, I loop through that list with a one-line shell script:

cat tab.list |while read s t; do db2 connect to bcudb; db2 -v "runstats on table $s.$t with distribution and detailed indexes all set profile"; db2 connect reset; done |tee stats.profile.out

Note that I could have used “set profile only” if I didn’t also want to actually do runstats on these tables, but in my case, I wanted to both do the runstats and set the profile. I then checked stats.profile.out for any failures with this quick grep:

 cat stats.profile.out |grep SQL |grep -v DB20000I |grep -v "SQL authorization ID   = DB2INST1"

Everything was successful.
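If you want to confirm that the profiles were actually stored, SYSCAT.TABLES keeps the profile text in the STATISTICS_PROFILE column. A quick check (a sketch, using the same predicates as the list above):

db2 "select  substr(tabschema,1,18) as tabschema
        , substr(tabname,1,40) as tabname
        , substr(statistics_profile,1,80) as profile
from syscat.tables
where   tableorg='R'
        and tabschema not like ('SYS%')
with ur"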

Setting up a schedule

I don’t want my runstats to kick off any old time they feel like it. I want to restrict them to run between 1 am and 6 am daily. To do this, I need to set up an automatic maintenance policy. There are samples that I can start with in $HOME/sqllib/samples/automaintcfg.

I first made a copy of DB2MaintenanceWindowPolicySample.xml, renaming it and moving it to a working directory. I ensured my new file contained these lines:

<DB2MaintenanceWindows
xmlns="http://www.ibm.com/xmlns/prod/db2/autonomic/config">
 <OnlineWindow Occurrence="During" startTime="01:00:00" duration="5">
   <DaysOfWeek>All</DaysOfWeek>
   <DaysOfMonth>All</DaysOfMonth>
   <MonthsOfYear>All</MonthsOfYear>
 </OnlineWindow>
</DB2MaintenanceWindows>

I don't want to set an offline window at this time, because I don't have one. The sample file has great information on how to configure different scenarios. While there is no option in the xml file to specify a database or different databases, I'm setting the policy with a command against a database, so I named the file with the database name in it and kept it in a place I can easily find later so that I can change it if need be.

By default, the online window is 24/7.

Now that I have an XML file that will do what I want, I can set that as the policy using the AUTOMAINT_SET_POLICYFILE procedure, like this:

-bash-4.1$ db2 "call sysproc.automaint_set_policyfile( 'MAINTENANCE_WINDOW', 'DB2MaintenanceWindowPolicyBCUDB.xml' )"
SQL1436N  Automated maintenance policy configuration file named

Well, oops, that didn’t work so well. I learned that the xml file you want to use must be in $HOME/sqllib/tmp. ALSO, it must be readable by the fenced user ID. With the way I have it set up (with the fenced user id in the primary group of my instance id), this is what I had to do to make that work:

-bash-4.1$ cp DB2MaintenanceWindowPolicyBCUDB.xml $HOME/sqllib/tmp
-bash-4.1$ chmod 740 $HOME/sqllib/tmp/DB2MaintenanceWindowPolicyBCUDB.xml

I was then able to successfully call the stored procedure:

-bash-4.1$ db2 "call sysproc.automaint_set_policyfile( 'MAINTENANCE_WINDOW', 'DB2MaintenanceWindowPolicyBCUDB.xml' )"

  Return Status = 0

When DB2 reads the file in, it is not depending on that file to exist forever. It is storing the information from the file in the database. You can use the AUTOMAINT_GET_POLICYFILE and AUTOMAINT_GET_POLICY stored procedures to pull that information back out. Remember that there is only one policy for each of the automatic maintenance categories, so it is best to first get the policy, change it, and then set it, so you do not accidentally overwrite what is already there.
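For example, to pull the current maintenance window policy back out into a file (which lands in $HOME/sqllib/tmp – the file name here is just one I made up):

-bash-4.1$ db2 "call sysproc.automaint_get_policyfile( 'MAINTENANCE_WINDOW', 'CurrentMaintWindowBCUDB.xml' )"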

Phew. Ok, that's what I had to do to set things up for my row-organized tables. My column-organized tables will also get runstats by default, and for the sake of trying it, I'm going to go with the defaults there and see what happens. And by see what happens, I mean I'll be querying up a storm to see what DB2 is doing.

Reorgs

This database does not have an offline maintenance window. So I need to configure online reorgs to occur. Much like with runstats, I'm going to let the BLU tables go for a while and see if the hype from IBM about just letting DB2 take care of it is really all that. The only reorgs there are for space reclamation anyway. But for my row-organized tables, I want to make sure they're taken care of.

Man, was I disappointed to discover that inplace/notruncate reorgs are STILL not supported as a part of automatic maintenance. This means that I cannot do table reorgs through DB2’s automation facilities … off to re-write my reorg script for yet another employer, I guess.

I’m trying to see if DB2 can at least manage my index reorgs for me online, though, with this syntax in my file:

<DB2AutoReorgPolicy
xmlns="http://www.ibm.com/xmlns/prod/db2/autonomic/config">
 <ReorgOptions dictionaryOption="Keep" indexReorgMode="Online"  useSystemTempTableSpace="false" />
 <ReorgTableScope maxOfflineReorgTableSize="52">
  <FilterClause />
 </ReorgTableScope>
</DB2AutoReorgPolicy>

And, of course, implementing that file with:

-bash-4.1$ cp DB2AutoReorgPolicyBCUDB.xml $HOME/sqllib/tmp
-bash-4.1$ chmod 740 $HOME/sqllib/tmp/DB2AutoReorgPolicyBCUDB.xml
-bash-4.1$ db2 "call sysproc.automaint_set_policyfile( 'AUTO_REORG', 'DB2AutoReorgPolicyBCUDB.xml' )"

  Return Status = 0

I think the combination of that and no offline reorg window defined will get me what I want on the index side anyway.

Analyzing Package Cache Size


Note: updated 7/21 to reflect location of the package cache high water mark in the MON_GET* table functions

I have long been a fan of a smaller package cache size, particularly for transaction processing databases. I have seen STMM choose a very large size for the package cache, and this presents several problems:

  • Memory used for the package cache might be better used elsewhere
  • A large package cache makes statement analysis difficult
  • A large package cache may be masking statement issues, such as improper use of parameter markers

Parameter Markers

Parameter markers involve telling DB2 that the same query may be executed many times with slightly different values, and that DB2 should use the same access plan, no matter what the values supplied are. This means that DB2 only has to compile the access plan once, rather than doing the same work repeatedly. However, it also means that DB2 cannot make use of distribution statistics to compute the optimal access plan. That means that parameter markers work best for queries that are executed frequently, and for which the value distribution is likely to be even or at least not drastically skewed.

The use of parameter markers is not a choice that the DBA usually gets to make. It is often a decision made by developers or even vendors. Since it is not an across-the-board best practice to use parameter markers, there are frequently cases where the wrong decisions are made. There are certainly queries and data sets where parameter markers will make things worse.

At the database level, we can use the STMT_CONC database configuration parameter (set to LITERALS) to force the use of common access plans for EVERYTHING. This is not optimal for the following reasons:

  • There are often some places where the value will always be the same, and in those places SQL would benefit more from a static value.
  • The SQL in the package cache will essentially never show static values used, which can be difficult when troubleshooting.
  • With uneven distribution of data, performance of some SQL may suffer.
  • There have been APARs about incorrect data being returned.

If you have interaction with developers on a deep and meaningful level, proper use of parameter markers is the best choice.

Parameter markers show up as question marks in SQL in the package cache. This statement uses parameter markers:

Select booking_num from SAMPLE.TRAILER_BOOKING where trailer_id = ?

Statement substitutions done by the statement concentrator use :LN, where N is a number representing the position in the statement. This statement shows values affected by the statement concentrator:

select count(*) from event where event_id in ( select event_id from sample.other_table where comm_id=:L0 ) and who_entered != :L1

Sizing the Package Cache

I’ve said that I don’t trust STMM to make the best choices for the package cache. As a result, I recommend setting a static value. How do I come up with the right value?

I often start by setting the PCKCACHESZ database configuration parameter to 8192 or 16384, and tune it upwards until I stop seeing frequent package cache overflows. A package cache overflow will write messages like this to the DB2 diagnostic log:

xxxx-xx-xx-xx.xx.xx.xxxxxx+xxx xxxxxxxxxxxxxx     LEVEL: Event
PID     : xxxxxxx              TID  : xxxxx       PROC : db2sysc 0
INSTANCE: db2                  NODE : 000         DB   : SAMPLE
APPHDL  : 0-xxxxx              APPID: xx.xxx.xxx.xx.xxxxx.xxxxxxxxxxx
AUTHID  : xxxxxxxx
EDUID   : xxxxx                EDUNAME: db2agent (SAMPLE) 0
FUNCTION: DB2 UDB, access plan manager, sqlra_cache_mem_please, probe:100
MESSAGE : ADM4500W  A package cache overflow condition has occurred. There is
          no error but this indicates that the package cache has exceeded the
          configured maximum size. If this condition persists, you should
          perform additional monitoring to determine if you need to change the
          PCKCACHESZ DB configuration parameter. You could also set it to
          AUTOMATIC.
REPORT  : APM : Package Cache : info
IMPACT  : Unlikely
DATA #1 : String, 274 bytes
Package Cache Overflow
memory needed             : 753
current used size (OSS)   : 15984666
maximum cache size (APM)  : 15892480
maximum logical size (OSS): 40164894
maximum used size (OSS)   : 48562176
owned size (OSS)          : 26017792
number of overflows       : xxxxx

I usually address these by increasing the package cache by 4096 pages at a time until the overflows are vastly less frequent. This could still end up being a considerable size if your application does not make appropriate use of parameter markers.

To look at details of your package cache size, you can look at this section of a database snapshot:

Package cache lookups                      = 16001443673
Package cache inserts                      = 4180445
Package cache overflows                    = 0
Package cache high water mark (Bytes)      = 777720137

I was a bit frustrated that the package cache high water mark did not seem to be in the MON_GET* functions – I'm going to need that before they discontinue the snapshot monitor. It turns out you can get the high water mark for the package cache with this query on 9.7 and above (thanks to Paul Bird's Twitter comment for pointing me to it):

select memory_pool_used_hwm
from table (MON_GET_MEMORY_POOL(NULL, CURRENT_SERVER, -2)) as mgmp 
where memory_pool_type='PACKAGE_CACHE' 
with ur

MEMORY_POOL_USED_HWM
--------------------
                 832

You can use that value to see how close to the configured maximum size (PCKCACHESZ) the package cache has actually come. In this particular database, the package cache size is 190000 (4K pages). In bytes that would be 778,240,000. That means in this case that the package cache has nearly reached the maximum at some point. But you can tell from the value of package cache overflows that it has not attempted to overflow the configured size.

The numbers above also allow me to calculate the package cache hit ratio. These numbers are also available in MON_GET_WORKLOAD on 9.7 and above or MON_GET_DATABASE on 10.5. The package cache hit ratio is calculated as:

100*(1-(package cache inserts/package cache lookups))

With the numbers above, that is:

100*(1-(4180445/16001443673))

or 99.97%

You do generally want to make sure your package cache hit ratio is over 90%.
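If you would rather pull the same numbers from the monitoring table functions, a query along these lines works against MON_GET_WORKLOAD on 9.7 and above – a sketch only; the same elements are available from MON_GET_DATABASE on 10.5:

select sum(pkg_cache_inserts) as pkg_cache_inserts
    , sum(pkg_cache_lookups) as pkg_cache_lookups
    , decimal(100 * (1 - (double(sum(pkg_cache_inserts)) / double(sum(pkg_cache_lookups)))), 5, 2) as pkg_cache_hit_ratio
from table(mon_get_workload(null, -2)) as t
with ur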

In addition to these metrics, you can also look at what percentage of time your database spends on compiling SQL. This can be computed over a specific period of time using MONREPORT.DBSUMMARY. Look for this section:

  Component times
  --------------------------------------------------------------------------------
  -- Detailed breakdown of processing time --

                                      %                 Total
                                      ----------------  --------------------------
  Total processing                    100               10968

  Section execution
    TOTAL_SECTION_PROC_TIME           80                8857
      TOTAL_SECTION_SORT_PROC_TIME    17                1903
  Compile
    TOTAL_COMPILE_PROC_TIME           2                 307
    TOTAL_IMPLICIT_COMPILE_PROC_TIME  0                 0
  Transaction end processing
    TOTAL_COMMIT_PROC_TIME            0                 76
    TOTAL_ROLLBACK_PROC_TIME          0                 0
  Utilities
    TOTAL_RUNSTATS_PROC_TIME          0                 0
    TOTAL_REORGS_PROC_TIME            0                 0
    TOTAL_LOAD_PROC_TIME              0                 0

You generally want to aim for a compile time percentage of 5% or less. Remember that MONREPORT.DBSUMMARY only reports data over the interval that you give it, with a default of 10 seconds, so you want to run this over time and at many different times before making a decision based upon it.
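Running the report is just a call with the monitoring interval in seconds as the argument; for a 60-second sample, for example:

db2 connect to SAMPLE
db2 "call monreport.dbsummary(60)"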

Summary

A properly sized package cache is important to database performance. The numbers and details presented here should help you find the appropriate size for your system.

Issues with STMM


I thought I’d share some issues with STMM that I’ve seen on Linux lately. I’ve mostly been a fan of STMM, and I still am for small environments that are largely transaction processing and have only one instance on a server.

Here are the details of this environment. The database is a small analytics environment. It used to be a BCU environment that was 4 data nodes and one coordinator node on 9.5. The database was less than a TB, uncompressed. There were also some single-partition databases for various purposes on the coordinator node. I’ve recently migrated it to BLU – 10.5 on Linux. The users are just starting to make heavier use of the environment, though I largely built and moved some data about 6 months ago. The client does essentially a full re-load of all data once a month.

The new environment is two DB2 instances – one for the largely BLU database, and one for a transaction processing database that replaces most of the smaller databases from the coordinator node. Each instance has only one database. The server has 8 CPUs and about 64 GB of memory – the minimums for a BLU environment.

First Crash

The first crash we saw was both instances going down within 2 seconds of each other. The last message before the crash looked like this:

2015-08-06-17.58.02.253956+000 E548084503E579        LEVEL: Severe
PID     : 20773                TID : 140664939472640 PROC : db2wdog
INSTANCE: db2inst1             NODE : 000
HOSTNAME: dbserver1
EDUID   : 2                    EDUNAME: db2wdog [db2inst1]
FUNCTION: DB2 UDB, base sys utilities, sqleWatchDog, probe:20
MESSAGE : ADM0503C  An unexpected internal processing error has occurred. All
          DB2 processes associated with this instance have been shutdown.
          Diagnostic information has been recorded. Contact IBM Support for
          further assistance.

2015-08-06-17.58.02.574134+000 E548085083E455        LEVEL: Error
PID     : 20773                TID : 140664939472640 PROC : db2wdog
INSTANCE: db2inst1             NODE : 000
HOSTNAME: dbserver1
EDUID   : 2                    EDUNAME: db2wdog [db2inst1]
FUNCTION: DB2 UDB, base sys utilities, sqleWatchDog, probe:8959
DATA #1 : Process ID, 4 bytes
20775
DATA #2 : Hexdump, 8 bytes
0x00007FEF1BBFD1E8 : 0201 0000 0900 0000                        ........

2015-08-06-17.58.02.575748+000 I548085539E420        LEVEL: Info
PID     : 20773                TID : 140664939472640 PROC : db2wdog
INSTANCE: db2inst1             NODE : 000
HOSTNAME: dbserver1
EDUID   : 2                    EDUNAME: db2wdog [db2inst1]
FUNCTION: DB2 UDB, base sys utilities, sqleCleanupResources, probe:5475
DATA #1 : String, 24 bytes
Process Termination Code
DATA #2 : Hex integer, 4 bytes
0x00000102

2015-08-06-17.58.02.580890+000 I548085960E848        LEVEL: Event
PID     : 20773                TID : 140664939472640 PROC : db2wdog
INSTANCE: db2inst1             NODE : 000
HOSTNAME: dbserver1
EDUID   : 2                    EDUNAME: db2wdog [db2inst1]
FUNCTION: DB2 UDB, oper system services, sqlossig, probe:10
MESSAGE : Sending SIGKILL to the following process id
DATA #1 : signed integer, 4 bytes
...

The most frequent cause of this kind of error in my experience tends to be memory pressure at the OS level – the OS sees that too much memory is being used, and instead of crashing itself, it chooses the biggest consumer of memory to kill. On a DB2 database server, this is almost always db2sysc or another DB2 process. I still chose to open a ticket with support, to get confirmation on this and see if there was a known issue.

IBM support pointed me to this technote, confirming my suspicions: http://www-01.ibm.com/support/docview.wss?uid=swg21449871. They also recommended “have a Linux system administrator review the system memory usage and verify that there is available memory, including disk swap space. Most Linux kernels now allow for the tuning of the OOM-killer. It is recommended that a Linux system administrator perform a review and determine the appropriate settings.” I was a bit frustrated with this response as this box runs on a PureApp environment and runs only DB2. The solution is to tune the OOM-killer at the OS level?

While working on the issue I discovered that I had neglected to set INSTANCE_MEMORY/DATABASE_MEMORY to fixed values, as is best practice on a system with more than one DB2 instance when you're trying to use STMM. So I set them for both instances and databases, allowing the BLU instance to have most of the memory. I went with the idea that this crash was basically my fault for not better limiting the two DB2 instances on the box, though I wish STMM played better with multiple instances.
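The settings themselves are simple configuration updates, run under each instance owner – the values here are placeholders in 4K pages and BCUDB is just an example database name, so size them for your own server:

db2 "update dbm cfg using INSTANCE_MEMORY 12000000"
db2 "update db cfg for BCUDB using DATABASE_MEMORY 11000000"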

Second Crash

Several weeks later, I had another crash, though this time only of the BLU instance, not of the other instance. It was clearly the same issue. I re-opened the PMR with support, and asked for help identifying what tuning I needed to do to keep these two instances from stepping on each other. IBM support again confirmed that it was a case of the OS killing DB2 due to memory pressure. This time, they recommended setting the Linux kernel parameter vm.swappiness to 0. While I worked on getting approvals for that, I tweeted about it. The DB2 Knowledge Center does recommend it be set to 0. I had it set to the default of 60.

Resolution

Scott Hayes reached out to me on Twitter because he had recently seen a similar issue. After a discussion with him about the details, I decided to implement a less drastic setting for vm.swappiness, and to instead abandon the use of STMM. I always set the package cache manually anyway. I had set the catalog cache manually. Due to problems with loads, I had already set the utility heap manually. In BLU databases, STMM cannot tune sort memory areas. All of this meant that the only areas STMM was even able to tune in my BLU database were DBHEAP, LOCKLIST, and the buffer pools. I looked at what the current settings were and set these remaining areas to just below what STMM had them at. I have already encountered one minor problem – apparently STMM had been increasing the DBHEAP each night during LOADs, so when they ran LOADs the first night, they failed due to insufficient DBHEAP. That was easy to fix, as the errors in the diagnostic log specified exactly how much DBHEAP was needed, so I manually increased the DBHEAP. I will have to keep a closer eye on performance tuning, but my monitoring already does things like send me an email when buffer pool hit ratios or other KPIs are off, so that's not much of a stretch for me.

Registry Variables and DB2_WORKLOAD=WC


I was happy to see the new workload registry variable in Commerce 7/DB2 9. Mostly out of laziness – it requires fewer variables to be set manually, but it also ensures that nothing major is missed (I imagine they may come up with more that won't be added to the workload registry variable in real time). I had a whole argument with whichever part of IBM a client brought in to do a load-test performance review because I had set one of the parameters to "ON" instead of "YES". I recently ran across this statement in the Info Center:

Note: If a registry variable requires Boolean values as arguments,
the values YES, 1, and ON are all equivalent and the values NO, 0, and OFF
are also equivalent. For any variable, you can specify any of the appropriate
equivalent values.

I will surely be quoting this and linking it in any future such disagreements.

If I look at what is set, it is mostly what we set by hand before, with a few additions:

[i] DB2_OPT_MAX_TEMP_SIZE=10240 [O]
[i] DB2_WORKLOAD=WC
[i] DB2_SKIPINSERTED=YES [O]
[i] DB2_OPTPROFILE=YES [O]
[i] DB2_USE_ALTERNATE_PAGE_CLEANING=ON
[i] DB2_INLIST_TO_NLJN=YES [O]
[i] DB2_MINIMIZE_LISTPREFETCH=YES [O]
[i] DB2_REDUCED_OPTIMIZATION=INDEX,JOIN,NO_SORT_MGJOIN,JULIE [O]
[i] DB2_EVALUNCOMMITTED=YES_DEFERISCANFETCH [O]
[i] DB2_ANTIJOIN=EXTEND [O]
[i] DB2_SKIPDELETED=YES [O]

I would like to hear the story behind DB2_REDUCED_OPTIMIZATION being set to "JULIE" – what, was that someone's girlfriend? Actually, that's the parameter that has me most interested out of all of them overall (and most worried, too).

I'm also interested in digging further into the use of optimization profiles and how Commerce 7 is using them. I do worry a bit that they may be locking in access methods that may not be appropriate for every database size/distribution.

I would like to find a complete reference on Commerce’s thoughts on each variable and why they work for Commerce. I’m just not a “trust me, it works” kind of person when it comes to these things.

HADR


What is HADR?

HADR is DB2's implementation of log shipping, which means it is a shared-nothing kind of solution. But it is log shipping at the log buffer level instead of the log file level, so it can be extremely up to date. It even has a synchronous mode that guarantees committed transactions on one server are also on the other server (in years of experience across dozens of clients, I've only ever seen NEARSYNC used). It can only handle two servers (there's no adding a third in), and it is active/warm spare – only with 9.7 Fixpack 1 and later can you do reads on the standby, and you cannot do writes on the standby.

How much does it cost?

As always verify with IBM because licensing changes by region and other factors I’m not aware of. But generally HADR is included with DB2 licensing – the rub is usually in licensing DB2 on the standby server. Usually the standby server can be licensed at only 100 PVU, which is frequently much cheaper than full DB2 licensing. If you want to be able to do reads on the standby, though, you’ll have to go in for full licensing. Usually clients run HADR only in production, though I have seen a couple lately doing it in QA as well to have a testing ground.

What failures does it protect against?

HADR protects against hardware failures – CPU, disk, memory, and the controllers and other hardware components. Tools like HACMP and Veritas use a shared-disk implementation, so they cannot protect against disk failure. I have seen both SAN failures and RAID array (the whole array) failures, so it may seem like one in a million, but even the most redundant disks can fail. HADR can also be used to facilitate rolling hardware maintenance and rolling FixPacks. You are not guaranteed to be able to keep the database up during a full DB2 version upgrade. It must be combined with other (included) products to automatically sense failures and fail over.

What failures does it not protect against?

HADR does not protect against human error, data issues, and HADR failures. If someone deletes everything from a table and commits the delete, HADR is not going to be able to recover from that. It is not a replacement for a good backup and recovery strategy. You must also monitor HADR – I treat HADR down in production as a sev 1 issue where a DBA needs to be called out of bed to fix it. I have actually lost a production raid array around 5 am when HADR had gone down around 1 am. Worst case scenarios do happen.

How to set it up

HADR is really not too difficult to set up on its own. Configuring automatic failover is a bit more difficult, though DB2 has made it significantly easier in 9.5 and above with the introduction of bundled TSA and the db2haicu tool. I'm not going to list every detail here because there are half a dozen white papers out there on how to set it up. The general idea is:

1. Set the HADR parameters on each server

HADR local host name                  (HADR_LOCAL_HOST) = your.primary.hostname 
HADR local service name                (HADR_LOCAL_SVC) = 18819
HADR remote host name                (HADR_REMOTE_HOST) = your.secondary.hostname
HADR remote service name              (HADR_REMOTE_SVC) = 18820
HADR instance name of remote server  (HADR_REMOTE_INST) = inst1
HADR timeout value                       (HADR_TIMEOUT) = 120
HADR log write synchronization mode     (HADR_SYNCMODE) = NEARSYNC
HADR peer window duration (seconds)  (HADR_PEER_WINDOW) = 120

2. Set the Alternate Servers on the Primary and the standby (for Automatic Client Reroute)

3. Set db configuration parameters INDEXREC to RESTART and LOGINDEXBUILD to ON

4. Take a backup (preferably Offline) of the database on the primary server

5. Restore the database on the standby server, leaving it in rollforward pending state

6. Start HADR on the standby

7. Start HADR on the primary

8. Wait 5 minutes and check HADR status (a command sketch for steps 6–8 follows this list)

9. Run db2haicu to set up TSA for automated failover

10. Test multiple failure scenarios at the app and database level
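A minimal sketch of the commands for steps 6 through 8, assuming a database named SAMPLE:

# On the standby server
db2 start hadr on database SAMPLE as standby

# On the primary server
db2 start hadr on database SAMPLE as primary

# On either server, after a few minutes
db2pd -db SAMPLE -hadr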

For chunks of this, your database will be unavailable. There are also a number of inputs you need to have ready for running db2haicu, and you will need ongoing sudo authority to execute at least one TSA related command.

Remember that the primary and standby servers should be as identical as possible – filesystems, hardware, and software.

Some clients also neglect step #10 – testing of failovers. This is an important step to make sure you really can failover. It is possible to think you have everything set up right, do a failover and then not have it work properly from the application’s perspective.

Gotchas

This section represents hours spent troubleshooting different problems or recovering from them. I hope it can help someone find an issue faster.

HADR is extremely picky about its variables. They must be exactly right with no typos, or HADR will not work. I have, on several occasions, had numbers reversed or the instance name off, and spent a fair amount of time looking for the error before finding it. Because of this, it can help to have another DBA look over the basics if things aren't working on setup. HADR is also picky about hosts file and/or db2nodes.cfg setup, and in some cases you may end up using an IP address in the db cfg parameters instead of a hostname.

HADR also sometimes fails after it tells you it has successfully started, so you must check the status after you start it.

Occasionally HADR doesn’t like to work from an Online backup, so an Offline one will be required. I have one note about it not going well with a compressed backup, but that was years ago, and I frequently used compressed backups without trouble.

HADR does not copy things that aren’t logged – so it is not a good choice if you have non-logged LOBs or if you do non-recoverable loads. If you are using HADR and you do a non-recoverable load, you have to take a backup on the primary and restore it into the standby – if you don’t, any table with a non-recoverable load will not be copied over, nor will future changes, and if you go to failover, then you will not be able to access that table. For this reason, I wouldn’t use it in a scenario where you don’t have good control over data being loaded into the database. If you do run into that, then you have to backup your primary database, restore it into your standby database, and start HADR.

HADR does go down sometimes without warning – so you must monitor it using whatever monitoring tools you have, and ensure that you respond very quickly when it goes down. I use db2pd to monitor (parsing the output with scripts), partially because db2pd works when other monitoring tools hang. We look at ConnectStatus, State, and LogGapRunAvg.
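The core of that monitoring is just the following (SAMPLE is a placeholder database name, and the exact field names in the output vary between DB2 versions):

db2pd -db SAMPLE -hadr | grep -E "ConnectStatus|State|LogGapRunAvg"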

On reboot, HADR comes up with database activation. Which means it usually comes up just fine on your primary database, but not on your standby database (no connections to prompt activation). So you’ll generally need to manually start hadr on your standby after a reboot. The primary database will not allow connections on activation until after it can communicate with the standby. This is to prevent a DBA’s worst nightmare – ‘Split Brain’. DB2’s protections against split-brain are pretty nifty. But this means that if you reboot both your primary and your standby at the same time and your primary comes up first, then your primary will not allow any connections until your standby is also up. This can be very confusing the first time or two that you see it. You can manually force the primary to start if you’re sure that the standby is not also up and taking transactions. Or if you’re rebooting both, just do the standby first and do the primary after the standby is back up and activated. If you need your standby down for a while, then stop HADR before you stop the servers. I would recommend NOT stopping HADR automatically on reboot, because the default behavior protects you from split-brain.

What is split-brain? It is simply both your primary and standby databases thinking they are the primary and taking transactions – getting you into a nearly impossible to resolve data conflict.

You must keep the same ids/groups on the primary and standby database servers. I’ve seen a situation on initial set up where the id that Commerce was using to connect to the database was only on the primary server, and not on the standby server, and thus on failover, the database looked fine, but Commerce could not connect.

You also want to be aware of any batch-jobs, data loads, or even scheduled maintenance like runstats or backups – when you fail over, you’ll need to run these on the other database server. Or you can also run them from a client that will get the ACR value and always point to the active database server. Frequently we don’t care which database server the database is running on, and may have it on what was initially the “standby” for months at a time.

Overall, I really like HADR and its ease of administration. The level of integration for TSA in 9.5/9.7 is great.

Managing DB2 transaction log files


Logging method

There are two methods of logging that DB2 supports: Circular and Archive. I believe Oracle has similar modes.

Circular

To my extreme disgust, the default that Commerce uses if you don't change anything is circular logging. Circular logging is more often appropriate for databases where you don't care about the data (I have seen it for databases supporting Tivoli and other vendors) or for Data Warehousing and Decision Support databases where you have extremely well-defined data loading processes that can easily be re-done on demand. You must also be willing to take regular outages for database backups, because circular logging does not allow you to take online backups. Circular logging also does not allow you to roll forward or back through transaction logs to reach a point in time – any restores are ONLY to the time a backup was taken.

On every new build, I move away from circular logging. I just don't find it appropriate for an OLTP database, where your requirements often include very high availability and the ability to recover from all kinds of disasters with no data loss.

Archive

So why, then, isn't archive logging the default? Well, it requires proper management of transaction log files, which can really get you in trouble if you don't know what you're doing. If you compress or delete an active transaction log, you will crash your database and have to restore from a backup. I've seen it happen, and it's not fun. And the highest frequency of OS-level backups you're willing to do should be applied to the directories holding transaction log files.

I ensure that my archive logs are always on a separate path from the active ones so I and whoever gets paged out when a filesystem is filling up can easily see which is which.

Personally, I use scripts to manage my transaction log files. I actually do most of it with my backup script. How long you keep them depends on your restore requirements and your overall backup/restore strategy. I also use a simple cron job to find files in the archive log path older than a certain time frame (1 day or 3 days is most common) and compress them. I hear that a nice safe way to delete logs is the PRUNE LOGFILE command, but I never got used to it.

This is one of the areas where it is critical for DBAs to have an excruciatingly high level of attention to detail.

Logging settings

Ok, ready for the most complicated part?

All the settings discussed are in the db cfg.

LOGRETAIN

So the most important setting here is LOGRETAIN. If it is set to 'NO', then you have circular logging. If it is set to 'RECOVERY', then you have archive logging. To enable archive logging, you simply update this parameter.

LOGARCHMETH1

Second in my mind is LOGARCHMETH1. This parameter specifies the separate path for your archive logs to be sent to. It can be a location on DISK or TSM. Do not leave it set to ‘LOGRETAIN’.

http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.config.doc/doc/r0011448.html
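Setting it to a disk location looks like this (the path here is just an example – put archive logs on a filesystem separate from your active log path):

db2 "update db cfg for SAMPLE using LOGARCHMETH1 DISK:/db2/archive_logs/"

Keep in mind that switching a database from circular to archive logging puts it into backup pending state, so plan to take a backup immediately afterwards.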

WTH is this USEREXIT thing?

I undoubtedly have some newer DBAs wondering about this. The LOGARCHMETH1 parameter and the others that dictate the location of archive logs were only introduced in DB2 8 (or was it 7?). Before that, we had these nasty things called userexit programs: we had to locate C compilers to compile them and keep track of the location of the uncompiled versions in case changes were needed. And the compiled file had to be in the right place with the right permissions. Really, I hated working with them. The functionality is still in DB2 if you want to use it. I imagine userexits could do things you can't do natively, but the parameters are so good that it would be a rare situation where you need them.

LOGFILSIZ

This is the size of each transaction log file. Generally my default for Commerce is 10000 (which I think Commerce itself actually sets on instance creation), but I've gone higher – it's not unusual to go up to 40,000 while data is being loaded or for stagingcopies.

LOGPRIMARY

This determines the number of log files of the size LOGFILSIZ that compose the database's active log files. These are all created on database activation (which happens on first connection), so you don't want to go too large. But you do generally want to have the space here to handle your active logs. Most Commerce databases do well at around 12.

LOGSECOND

This determines the number of additional active logs that can be allocated as needed. LOGPRIMARY + LOGSECOND cannot exceed 255. The nice thing about LOGSECOND is that these are not allocated on database activation, but only as needed. The other awesome thing here is that they can be increased online – one of the few logging parameters that can be. I usually start with 50, but increase if there’s a specific need for more. Remember, these should not be used on an ongoing basis – just to handle spikes.
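Because LOGSECOND can be changed online, bumping it to ride out a spike is as simple as this (SAMPLE and the value are placeholders):

db2 "update db cfg for SAMPLE using LOGSECOND 100 immediate"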

All the others

So there are all kinds of nifty things you can do with logging – infinite logging, mirrored logging, logging to a raw device, etc. I'm not going to cover all of the logging parameters in this post.

Potential issues

Deleting or compressing an active log file

The best case if you delete or compress an active log file is that DB2 is able to recreate it. This may affect your ability to take online backups. The worst (and more likely) case is that your database ceases functioning and you have to restore from backup. Keep your active and archive logs in separate directories to help prevent this, and educate anyone who might try to alleviate a full filesystem. If you do get an error on an online backup referencing the inability to include a log file, take an offline backup just as soon as you can – you will be unable to take online backups until you do.

Filling up a filesystem due to not managing log files

If your archive log filesystem is separate and fills up, it doesn’t hurt anything. If the filesystem your active log path is on fills up, your database will be inaccessible until you clear up the filesystem full. The moment the filesystem is no longer full, the database will function, so there is no need to restore. I recommend monitoring for any filesystems involved in transaction logging.

Deleting too many log files and impacting recovery

If you’re on anything before DB2 9.5, make absolutely sure that you use the “include logs” keyword on the backup command. If you don’t, you may end up with a backup that is completely useless, because you MUST have at least one log file to restore from an online backup. When you delete log files, keep in mind your backup/recovery strategy. There’s very little worse than really needing to restore but being unable to do so because you’re missing a file. I recommend backing up your transaction logs to tape or through other OS level methods as frequently as you can.

Deleting recent files and impacting HADR

Sometimes HADR needs to access archive log files – especially if HADR is behind and needs to catch up. If you run into this situation, you have to re-set-up HADR using a database restore. If you’re using HADR, it is important to monitor HADR so you can catch failures as soon as possible and reduce the need for archive logs.

Log files too small

Tuning the size of your log files may be a topic for another post, but I'll cover the highlights. Large deletes are the most likely to chew through everything you've got. The best solution is to break up large units of work into smaller pieces, especially deletes. Where that's not possible (ahem, stagingcopy), you'll need to increase any of LOGFILSIZ, LOGPRIMARY, or LOGSECOND. Only LOGSECOND can be changed without recycling the database.

Log file saturation

This one confuses the heck out of new DBAs. You get what looks like a log file full, yet the disk is not full and a snapshot says there’s plenty of log space available. The problem here is that with archive logging, log files and each spot in those log files must be used sequentially – even if there are things that have already been committed. Normally the database is rolling through the logs, with the same number of files active at once, but constantly changing which files.

Sometimes an old connection is sitting out there hanging on to a page in the log file with an uncommitted unit of work. Then the connection becomes idle and stays that way, sometimes for days. Then DB2 gets to the point where it has to open another log file, and it can’t because that would be more than it is allowed to allocate. So it throws an error that looks pretty similar to log file full. In that case, you must force off the old idle connection. Details are written to the diag log, and you can also use a database snapshot to get the id of the connection holding the oldest log file.
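A quick sketch of tracking down and removing the culprit – the application handle shown (12345) is a made-up example:

db2 get snapshot for database on SAMPLE | grep -i "oldest transaction"
db2 "force application (12345)"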

This never happens to Commerce's own connections, in my experience; it is usually a developer's connection from what I've seen in Commerce databases. Commerce, when functioning normally, rarely has a connection with more than 5 minutes of idle time. So I like to have a DB2 governor running that forces off connections that are idle for more than 4 hours.

Locking Parameters


So I thought I’d write a post covering locking parameters. This is by no means a comprehensive coverage of isolation levels and locking, but more of a practically oriented guide to the parameters available in DB2 that relate to locking.

LOCKTIMEOUT

This database configuration parameter specifies the time in seconds that a connection will wait for a needed lock before returning an error to the user.

Locktimeout is actually powerful functionality for OLTP/e-commerce databases. The idea is that an application should either do its work or fail and get out of the way. DB2 has a bit of a bad reputation for concurrency. I tend to think that this is because DB2 favors data integrity over concurrency, but I'm sure an Oracle DBA would disagree with that characterization. For WebSphere Commerce or any OLTP/e-commerce databases, LOCKTIMEOUT should be set to a value between 30 and 90 seconds. Other types of databases may have other appropriate values.

Be exceedingly careful with the default of -1, though. -1 means "wait forever", and this has a couple of implications. The first is that this "wait forever" may appear to the end user to be a hang – a query that is simply not returning results. The other one is that you can end up with some interesting lock chaining scenarios. The main problem is not always that one connection is waiting on one other connection – the problem tends to be that the waiting connection also has a dozen or a hundred other locks, and other connections may pile up behind those locks. db2top even has an option from the locks screen to list out lock chains. I've seen some ESB databases (where the ESB application holds a lock on the SIBOWNER table continuously) where runstats and/or automatic runstats evaluation have piled up behind the application locks over the course of weeks to the point where the database finally becomes unusable, and the various runstats have to be manually cancelled. This does not occur if LOCKTIMEOUT is set to a value.

To check your current value:

$ db2 get db cfg for <db_name> |grep LOCKTIMEOUT
 Lock timeout (sec)                        (LOCKTIMEOUT) = 60

The database must be recycled for changes to LOCKTIMEOUT to take effect.
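Changing it is a single configuration update – a sketch, with SAMPLE and 60 as placeholders:

db2 "update db cfg for SAMPLE using LOCKTIMEOUT 60"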

LOCKTIMEOUT info center entry

LOCKLIST

This database configuration parameter is the size in 4k pages of the area of memory that DB2 uses to store locking information.

Contrary to some beliefs, changes to this parameter will not help locktimeout or deadlock issues unless lock escalations are also occurring. I have to explain this at least a couple of times a year to one client or another. Generally the only time you will tune this is if you do see lock escalation. Lock escalations are noted in the DB2 diagnostic log and also in database snapshots. This parameter can be designated as one of the ones that is automatically tuned by STMM. If you are not allowing STMM to automatically tune it, I do recommend going higher than the default to start – I usually start with 4800 when manually tuning.
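A quick way to check whether escalations are actually happening (a sketch – SAMPLE is a placeholder):

db2 get snapshot for database on SAMPLE | grep -i "escalation"

Non-zero and growing counts for "Lock escalations" or "Exclusive lock escalations" are the signal that LOCKLIST or MAXLOCKS needs attention.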

Each lock takes either 128 or 256 bytes, depending on whether other locks are held on the same object.

To check your current value:

$ db2 get db cfg for <db_name> |grep LOCKLIST
 Max storage for lock list (4KB)              (LOCKLIST) = 4800

One nice thing is that any changes to this parameter will take place immediately – no need to recycle the database or the instance for it to take effect.

LOCKLIST info center entry

MAXLOCKS

This database configuration parameter specifies the maximum percentage of the LOCKLIST that a single connection can use. This is designed to help prevent all of the LOCKLIST being consumed by a single connection. It is something that was more likely to be tuned on previous versions of DB2, where LOCKLIST could not be set to automatically increase to avoid locking issues, and it may still need tuning, particularly on ODS or DW databases where memory is constrained. Like LOCKLIST, this parameter can be set to automatic.

To check your current value:

$ db2 get db cfg for <db_name> |grep MAXLOCKS
 Percent. of lock lists per application       (MAXLOCKS) = 10

Like LOCKLIST, this parameter can be changed online with changes taking place immediately with no recycle needed.

MAXLOCKS info center entry

DLCHKTIME

This database configuration parameter specifies the frequency (in milliseconds) at which DB2 checks for deadlocks. The default is 10,000 ms (10 seconds). I don't think I've ever seen this one changed.

To check your current value:

$ db2 get db cfg for wc005d04 |grep DLCHKTIME
 Interval for checking deadlock (ms)         (DLCHKTIME) = 10000

If you do end up having to change it, you can change it immediately without a database or instance recycle.

DLCHKTIME info center entry

DB2_EVALUNCOMMITTED

This DB2 registry parameter is one of three that changes how db2 locks rows. As such it is dangerous and should only be used if your application explicitly supports its use. WebSphere Commerce supports the use of all three. Be careful of DB2 instances where you have more than one database – this is set at the instance level, and you’ll want to make sure that all applications accessing any database in the instance support these parameters.

This DB2 registry parameter allows db2 to evaluate rows to see if they meet the conditions of the query BEFORE locking the row when using RS or CS isolation levels. The normal behavior for these isolation levels would be to lock the row before determining if it matched.

To check your current value:

$ db2set -all |grep DB2_EVALUNCOMMITTED
[i] DB2_EVALUNCOMMITTED=YES

If nothing is returned from this command, then the parameter is not set. The entire db2 instance must be recycled (db2stop/db2start) for changes to this parameter to take effect.

DB2_EVALUNCOMMITTED info center entry

DB2_SKIPDELETED

This DB2 registry parameter is one of three that changes how db2 locks rows. As such it is dangerous and should only be used if your application explicitly supports its use. WebSphere Commerce supports the use of all three. Be careful of DB2 instances where you have more than one database – this is set at the instance level, and you’ll want to make sure that all applications accessing any database in the instance support these parameters.

This DB2 registry parameter allows DB2 to skip uncommitted deleted rows during index scans. If it is not set, db2 will still evaluate uncommitted deleted rows during index scans. The normal behavior is for DB2 to evaluate uncommitted deleted rows in indexes until they are actually committed.

To check your current value:

$ db2set -all |grep DB2_SKIPDELETED
[i] DB2_SKIPDELETED=ON

If nothing is returned from this command, then the parameter is not set. The entire db2 instance must be recycled (db2stop/db2start) for changes to this parameter to take effect.

DB2_SKIPDELETED info center entry

DB2_SKIPINSERTED

This DB2 registry parameter is one of three that changes how db2 locks rows. As such it is dangerous and should only be used if your application explicitly supports its use. WebSphere Commerce supports the use of all three. Be careful of DB2 instances where you have more than one database – this is set at the instance level, and you’ll want to make sure that all applications accessing any database in the instance support these parameters.

This DB2 registry parameter allows DB2 to skip uncommitted newly inserted rows. If this parameter is not set, DB2 waits for the inserts to be committed or rolled back before continuing – you can see how this might not be ideal for a database that requires high concurrency. Like the others, this applies to CS and RS isolation levels.

To check your current value:

$ db2set -all |grep DB2_SKIPINSERTED
[i] DB2_SKIPINSERTED=ON

DB2_SKIPINSERTED info center entry

Deadlock Event Monitor

While not a parameter per se, having your deadlock event monitor properly set up is important to deadlock analysis. DB2 starting with version 8 comes with a detailed deadlock event monitor enabled by default. This is actually awesome because it means that in many cases, we have the data we need to analyze deadlocks after they happen. But one of the problems is that this event monitor is set up with very little space. Because of this, I re-create it whenever I’m setting up a new database. My favorite syntax for that is:

db2 "create event monitor <dl_evmon_name> for deadlocks with details write to file 'ros_detaildeadlock' maxfiles 2000 maxfilesize 10000 blocked append autostart"

You have to have the disk space to support that, and you still may have to clear out old output files by dropping and recreating the event monitor from time to time.

I frequently get questions about having the deadlock event monitor write to tables. My inclination is not to write to tables – mostly because I've seen deadlocking issues with hundreds or thousands of deadlocks per hour that just creamed the database – and the additional load of writing to tables during such an issue might make things even worse.

So I hope something there helps someone who is looking at locking parameters.


Analyzing Deadlocks – the old way


In 9.7, DB2 started offering a new monitoring method for deadlocking. Though this post describes the "old" way, this method also works in DB2 9.7. Detailed deadlock event monitors have been deprecated, but not yet removed. This means that even in 9.7, you can still create them and work with them.

If you’re at all confused about the difference between deadlocks and lock timeouts, please first read my post on Deadlocks VS. Lock timeouts.

Creating the deadlock event monitor

One of the most critical things here is that you must have the detailed deadlock event monitor in place and working before you run into an issue. By default (even in 9.7), DB2 has one called simply 'db2detaildeadlock'. The only problem with it is that it may run out of space rather quickly. As a result, I re-create it on build, using this syntax (you'll need a database connection, of course):

db2 "create event monitor my_detaildeadlock for deadlocks with details write to file 'my_detaildeadlock' maxfiles 2000 maxfilesize 10000 blocked append autostart"
DB20000I  The SQL command completed successfully.

In addition, you must actually create the directory for the event monitor manually. It goes in the 'db2event' subdirectory of the database path, so in my latest example, I used something like this statement to create it:

mkdir /db_data/db2inst1/NODE0000/SQL00002/db2event/my_detaildeadlock
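
If you're not sure what your database path is, the SYSIBMADM.DBPATHS administrative view will show it – a quick sketch (look for the DBPATH entry in the output):

db2 "select substr(type,1,20) as type, substr(path,1,80) as path from sysibmadm.dbpaths"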

And then there’s activating the new one and dropping the old one:

db2 "set event monitor ros_detaildeadlock state=1"
DB20000I  The SQL command completed successfully.

db2 "set event monitor db2detaildeadlock state=0"
DB20000I  The SQL command completed successfully.

db2 "drop event monitor db2detaildeadlock"
DB20000I  The SQL command completed successfully.

Finally, you’ll want to verify the event monitor state:

> db2 "select substr(evmonname,1,30) as evmonname, EVENT_MON_STATE(evmonname) as state from syscat.eventmonitors with ur"

EVMONNAME                      STATE
------------------------------ -----------
MY_DETAILDEADLOCK                        1

  1 record(s) selected.

1 means active for the state, and 0 means not active, so this is the output we want.

Parsing and Analyzing output

Now that you've got the event monitor running, what do you do with it? Well, assuming you actually have some deadlocks (as you can tell through the snapshot monitor, using db2top, db2pd, the db2 admin views, or the get snapshot command), you'll want to flush the event monitor and convert its output to human-readable form:

> db2 flush event monitor MY_DETAILDEADLOCK
DB20000I  The SQL command completed successfully.
> db2evmon -path /db_data/db2inst1/NODE0000/SQL00002/db2event/my_detaildeadlock >deadlocks.out

Reading /db_data/db2inst1/NODE0000/SQL00002/db2event/my_detaildeadlock/00000000.evt ...

Your path may be different, of course. I prefer the path option on db2evmon because I've had fewer problems with it. There is an option to specify the dbname and event monitor name – I just find that it's not as reliable.
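
If you want a quick sanity check that deadlocks have actually been recorded before you go parsing output, the snapshot counters are enough – a sketch, assuming a database named sample:

db2 "get snapshot for database on sample" | grep -i deadlock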

So now you’ve done the easy part. Yep, that’s right, that’s the easy part. Depending on the number of deadlocks, you may now have a giant file. I seem to remember parsing a 15 GB one at one time. Here are some snippets of the output to give an idea of what you’re looking at:

379) Deadlock Event ...
  Deadlock ID:   20
  Number of applications deadlocked: 2
  Deadlock detection time: 01/03/2012 14:06:13.425034
  Rolled back Appl participant no: 2
  Rolled back Appl Id: 172.19.10.61.37259.120103200006
  Rolled back Appl seq number: : 0009
...
381) Deadlocked Connection ...
  Deadlock ID:   20
  Participant no.: 2
  Participant no. holding the lock: 1
  Appl Id: 172.19.10.61.37259.120103200006
  Appl Seq number: 00009
  Tpmon Client Workstation: spp27comm02x
  Appl Id of connection holding the lock: 172.19.10.61.62895.120103194755
  Seq. no. of connection holding the lock: 00001
  Lock wait start time: 01/03/2012 14:06:03.651592
  Lock Name       : 0x02000C1A1500BCE31800000052
  Lock Attributes : 0x00000000
  Release Flags   : 0x00000001
  Lock Count      : 1
  Hold Count      : 0
  Current Mode    : none
  Deadlock detection time: 01/03/2012 14:06:13.425119
  Table of lock waited on      : USERS
  Schema of lock waited on     : WSCOMUSR
  Data partition id for table  : 0
  Tablespace of lock waited on : USERSPACE1
  Type of lock: Row
  Mode of lock: X   - Exclusive
  Mode application requested on lock: NS  - Share (and Next Key Share)
  Node lock occured on: 0
  Lock object name: 106899963925
  Application Handle: 47264
  Deadlocked Statement:
    Type     : Dynamic
    Operation: Fetch
    Section  : 2
    Creator  : NULLID
    Package  : SYSSH200
    Cursor   : SQL_CURSH200C2
    Cursor was blocking: FALSE
    Text     : SELECT T1.STATE, T1.MEMBER_ID, T1.OPTCOUNTER, T1.TYPE, T2.FIELD2, T2.REGISTRATIONUPDATE, T2.FIELD3, T2.LASTORDER, T2.LANGUAGE_ID, T2.PREVLASTSESSION, T2.SETCCURR, T2.DN, T2.REGISTRATIONCANCEL, T2.LASTSESSION, T2.REGISTRATION, T2.FIELD1, T2.REGISTERTYPE, T2.PROFILETYPE, T2.PERSONALIZATIONID FROM MEMBER  T1, USERS  T2 WHERE T1.TYPE = 'U' AND T1.MEMBER_ID = T2.USERS_ID AND T1.MEMBER_ID = ?
  List of Locks:
...
383) Deadlocked Connection ...
  Deadlock ID:   20
  Participant no.: 1
  Participant no. holding the lock: 2
  Appl Id: 172.19.10.61.62895.120103194755
  Appl Seq number: 00905
  Tpmon Client Workstation: spp27comm02x
  Appl Id of connection holding the lock: 172.19.10.61.37259.120103200006
  Seq. no. of connection holding the lock: 00001
  Lock wait start time: 01/03/2012 14:06:03.657097
  Lock Name       : 0x02000D0E2F00F8D61800000052
  Lock Attributes : 0x00000000
  Release Flags   : 0x40000000
  Lock Count      : 1
  Hold Count      : 0
  Current Mode    : U   - Update
  Deadlock detection time: 01/03/2012 14:06:13.425274
  Table of lock waited on      : MEMBER
  Schema of lock waited on     : WSCOMUSR
  Data partition id for table  : 0
  Tablespace of lock waited on : USERSPACE1
  Type of lock: Row
  Mode of lock: NS  - Share (and Next Key Share)
  Mode application requested on lock: X   - Exclusive
  Node lock occured on: 0
  Lock object name: 106685792303
  Application Handle: 47206
  Deadlocked Statement:
    Type     : Dynamic
    Operation: Execute
    Section  : 25
    Creator  : NULLID
    Package  : SYSSH200
    Cursor   : SQL_CURSH200C25
    Cursor was blocking: FALSE
    Text     : UPDATE MEMBER  SET STATE = ?, OPTCOUNTER = ? WHERE MEMBER_ID = ? AND OPTCOUNTER = ?
  List of Locks:
...

I’ve removed the list of locks due to length, and also entries on the connection events, but have not altered the actual output here.

The “Deadlock ID” here lets us identify which deadlock this was a participant in. Deadlocks most frequently involve 2 connections, but they can involve 3, 4, 5, or even more.

Looking at "Participant no" both in the "Deadlock Event" section and the "Deadlocked Connection" sections, along with "Rolled back Appl participant no" in the "Deadlock Event" section, you can tell which connection was rolled back and which was allowed to continue.

There’s a lot more useful information there to parse through – most of it is pretty obvious in its meaning.

It is worth going through to determine which statements were most frequently involved in deadlocks – whether the same statements show up over and over again. It's also useful to analyze the timing of the deadlocks – I find summarizing by hour very helpful in determining whether they were limited to a specific time period. It can also be interesting to summarize by table to see if a particular table is frequently involved.
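
If you're working from the flat file, a little shell gets you the by-hour summary – a sketch against the deadlocks.out file generated above, keying on the detection time in the "Deadlock Event" sections (the detection time also appears in each connection record, so anchor on the event header to avoid double counting):

# count deadlock events per hour (date, hour, count)
grep -A 3 ") Deadlock Event" deadlocks.out \
  | grep "Deadlock detection time" \
  | awk '{print $4, substr($5,1,2)":00"}' \
  | sort | uniq -c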

What to do with the analysis

The number one thing to do with what you find is to provide the SQL to your developers. They should be able to understand where that SQL is coming from in your application, and should be able to come up with ideas to reduce the deadlocking.

Remember that deadlocking is an application problem whose symptoms appear on the database. The sum total of everything you can do that might reduce deadlocking at the database level is:

  1. Keep runstats current
  2. Set the db2 registry variables, ONLY IF YOUR APPLICATION EXPLICITLY SUPPORTS THEM:
  • DB2_SKIPINSERTED
  • DB2_SKIPDELETED
  • DB2_EVALUNCOMMITTED

Increasing LOCKLIST will not help with deadlocking unless you're also seeing lock escalations.

References:

Info Center entry on deprecation of the detaildeadlock event monitor: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.wn.doc/doc/i0054715.html

Info Center entry on db2evmon: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0002046.html

Keep your eyes open for a new post on Analyzing Deadlocks the new way – using the event monitor for locks.

Analyzing Deadlocks – the new way

The section titled “To Format the Output to a Flat File” was updated on 2/13/2012.
Edit on 12/11/2014: This new method of analyzing locking issues became available in DB2 9.7.

So you can still use the old way, and if you want to avoid event monitors that write to tables, that’s still the only way. See Analyzing Deadlocks – the old way for more details on that method.

This new method is not yet enabled by default, but I would expect it would be in a future release. The pattern I’ve seen IBM follow is that in one version, they introduce a new way of doing something and deprecate the old way. In the next version, they generally make the new way the default, requiring you to take more drastic actions to keep using the old one. Then in the following version, they remove the ability to use the old method. Sometimes they allow more than one version to pass in each of those steps, but it sure seems likely that’s the general direction they’re going now.

I haven’t actually done all that much with this in production, though I’m in the process of doing so now, so I don’t have a strong opinion which is better while we have the choice.

Creating the Event Monitor

First, you'll want to create the table for this in a 32K tablespace. Assuming you don't already have one, you need to do this:

Create the 32k bufferpool (if you’re working with WebSphere Commerce, this exists out of the box, but you still need to run the last two commands):

db2 "CREATE BUFFERPOOL BUFF32K IMMEDIATE SIZE 2500 AUTOMATIC PAGESIZE 32 K"

Create a 32k tablespace (using AST) – only needed if you don’t already have one you want to use for this:

db2 "create large tablespace TAB32K pagesize 32 K bufferpool BUFF32K dropped table recovery on"

Create a 32k temp tablespace (using AST) – only needed if you don’t already have one:

db2 "CREATE SYSTEM TEMPORARY TABLESPACE TEMPSYS32K PAGESIZE 32 K BUFFERPOOL BUFF32K"

Create the Event Monitor for Locks:

>db2 "create event monitor my_locks for locking write to unformatted event table (table dba.my_locks in TAB32K) autostart"
DB20000I  The SQL command completed successfully.
> db2 "set event monitor MY_LOCKS state=1"
DB20000I  The SQL command completed successfully.

Then verify that the event monitor has the correct state:

> db2 "select substr(evmonname,1,30) as evmonname, EVENT_MON_STATE(evmonname) as state from syscat.eventmonitors where evmonname='MY_LOCKS' with ur"

EVMONNAME                      STATE
------------------------------ -----------
MY_LOCKS                                1

  1 record(s) selected.

1 means active, so that’s what we want. 0 means not active.

Setting the Collection Parameters

So this was new to me. In addition to creating the event monitor for locks, you also have to enable collection of information in the database configuration or at the workload level. I'm not used to working at the workload level, so I'll go with what makes more sense to me and give you the parameters to set at the database level.

MON_LOCKTIMEOUT

Changes to this parameter should take effect without a database recycle. By default, this parameter is set to “NONE”. Possible values include:

  • NONE – no data is collected on lock timeouts (DEFAULT)
  • WITHOUT_HIST – data about lock timeout events is sent to any active event monitor tracking locking events
  • HISTORY – the last 250 activities performed in the same UOW are tracked by event monitors tracking locking events, in addition to the data about lock timeout events.
  • HIST_AND_VALUES – In addition to the last 250 activities performed in the same UOW and the data about lock timeout events, values that are not long or XML data are also sent to any active event monitor tracking locking events

Based on similar settings with the deadlock event monitors, I'm going to guess that WITHOUT_HIST would be the most useful for everyday use. To set that, use this syntax:

> db2 update db cfg for dbname using MON_LOCKTIMEOUT WITHOUT_HIST immediate
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.

MON_DEADLOCK

The values here are similar to those for MON_LOCKTIMEOUT; however, the default is different. The default here is WITHOUT_HIST. All possible values include:

  • NONE – no data is collected on deadlocks
  • WITHOUT_HIST – data about deadlock events is sent to any active event monitor tracking locking events (DEFAULT)
  • HISTORY – the last 250 activities performed in the same UOW are tracked by event monitors tracking locking events, in addition to the data about deadlock events.
  • HIST_AND_VALUES – In addition to the last 250 activities performed in the same UOW and the data about deadlock events, values that are not long or XML data are also sent to any active event monitor tracking locking events

My initial thought here would also be to go with WITHOUT_HIST.

MON_LOCKWAIT

This has essentially the same options as the last two. By default, this parameter is set to “NONE”. Possible values include:

  • NONE – no data is collected on lock waits (DEFAULT)
  • WITHOUT_HIST – data about lock wait events is sent to any active event monitor tracking locking events
  • HISTORY – the last 250 activities performed in the same UOW are tracked by event monitors tracking locking events, in addition to the data about lock wait events.
  • HIST_AND_VALUES – In addition to the last 250 activities performed in the same UOW and the data about lock wait events, values that are not long or XML data are also sent to any active event monitor tracking locking events

I would be very cautious about setting this one away from the default. I consider some level of lock waiting absolutely normal. If you were to set the value of the next related parameter – MON_LW_THRESH – too low, this could generate a huge amount of data. On the other hand, if your transaction time is being increased by lock wait time, this may be valuable to use. I would go with the default of NONE for normal use.

MON_LW_THRESH

The default here is 5 seconds (5,000,000). For whatever reason, this is specified in microseconds (1,000,000 microseconds per second). I certainly wouldn't recommend setting it too low. This parameter only means something if MON_LOCKWAIT is set to something other than NONE.
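
If you do decide to collect lock wait data, raising this threshold well above the default is one way to keep the volume manageable. A sketch, assuming a database named sample and an arbitrary 30-second threshold:

> db2 update db cfg for sample using MON_LW_THRESH 30000000 immediate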

MON_LCK_MSG_LVL

This parameter indicates what events will be logged to the notify log. Not being a big fan of the notify log, I’m not sure how much I’d use this, but here are the possible values:

  • 0 – No notification of locking phenomena is done, including deadlocks, lock escalations, and lock timeouts
  • 1 – Notification of only lock escalations is done – notifications are not done for deadlocks or lock timeouts
  • 2 – Notification of lock escalations and deadlocks are done – no notification of lock timeouts is done
  • 3 – Notification is done for lock escalations, deadlocks, and lock timeouts

My inclination here would be to set this to 2 – if I'm looking back, I want to know about deadlocks and escalations, and I worry that lock timeouts might be a bit much. To update this, use:

> db2 update db cfg for dbname using MON_LCK_MSG_LVL 2 immediate
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
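
Once you've made your choices, it's easy to double-check where all of these monitoring parameters stand for a database – a sketch, again assuming a database named sample:

db2 get db cfg for sample | grep -i "mon_"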

Parsing/Analyzing Output

So that last bit was the easy part. That gets you basic collection of the data you need, in an unformatted way. Now you have to format the data and make sense of it. You can format the data either to a flat file or to tables. If you have severe locking issues, go for the flat file to eliminate additional database load. If your locking issues are more moderate and you still have available capacity on your database server, go with the tables – they're easier to query and summarize.

Getting output in human-readable format

To format the output to a flat file:

This section updated 2/13/2012.

I had a bear of a time getting this to work. I started with the instructions in this technical article: http://www.ibm.com/developerworks/data/library/techarticle/dm-1004lockeventmonitor/

But that article references sample files that were missing from every 9.7 installation I have (over a dozen different servers on at least two different flavors of Linux). I asked a colleague who works on a wider variety of systems, and they did not have the same issue. So I have no idea what the deal is here, but there must be someone else in the same place as me, looking at the directories and running find commands and so on with no results. Here's how I finally made it work.

Make the directory $HOME/bin if you don’t already have it

mkdir $HOME/bin

Add to your path $HOME/bin:$HOME/sqllib/java/jdk64/bin (substituting your instance home directory for $HOME). Here’s the syntax I used for that:

export PATH=$PATH:/db2home/db2inst1/bin:/db2home/db2inst1/sqllib/java/jdk64/bin

Into a file in $HOME/bin called db2evmonfmt.java, copy the content from this link: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=%2Fcom.ibm.db2.luw.apdv.sample.doc%2Fdoc%2Fjava_jdbc%2Fs-db2evmonfmt-java.html

Install DB2 on your laptop or a Windows system and copy C:\Program Files\IBM\SQLLIB_01\samples\java\jdbc\DB2EvmonLocking.xsl to $HOME/bin. I could not find DB2EvmonLocking.xsl online anywhere, and I don't want to get myself on the wrong side of IBM legally by posting it myself, so if anyone at IBM reads this, I urge you to make this file available through the info center too.

Compile db2evmonfmt.java using this as the db2 instance owner from the $HOME/bin directory:

javac db2evmonfmt.java

It will create two class files in the same directory.

Finally, use this to actually generate the flat-file report:

java db2evmonfmt -d sample -ue DBA.MY_LOCKS -ftext

Here, 'sample' is replaced by your database name, and "DBA.MY_LOCKS" is the unformatted event monitor table name.

The output’s not bad, overall – I think more useful than the db2detaildeadlock formatted output that we got with the old method. The output you get looks like this:

> java db2evmonfmt -d sample -ue DBA.MY_LOCKS -ftext
SELECT evmon.xmlreport FROM TABLE ( EVMON_FORMAT_UE_TO_XML( 'LOG_TO_FILE',FOR EACH ROW OF ( SELECT * FROM DBA.MY_LOCKS  ORDER BY EVENT_ID, EVENT_TIMESTAMP, EVENT_TYPE, MEMBER ))) AS evmon

-------------------------------------------------------
Event ID               : 1
Event Type             : LOCKTIMEOUT
Event Timestamp        : 2012-02-07-12.52.54.529330
Partition of detection : 0
-------------------------------------------------------

Participant No 1 requesting lock
----------------------------------
Lock Name            : 0x03002900000000000000000054
Lock wait start time : 2012-02-07-12.52.09.070093
Lock wait end time   : 2012-02-07-12.52.54.529330
Lock Type            : TABLE
Lock Specifics       :
Lock Attributes      : 00000000
Lock mode requested  : Intent Exclusive
Lock mode held       : Exclusive
Lock Count           : 0
Lock Hold Count      : 0
Lock rrIID           : 0
Lock Status          : Converting
Lock release flags   : 40000000
Tablespace TID       : 3
Tablespace Name      : TAB8K
Table FID            : 41
Table Schema         : WSCOMUSR
Table Name           : STAGLOG

Attributes            Requester                       Owner
--------------------- ------------------------------  ------------------------------
Participant No        1                               2
Application Handle    030682                          030734
Application ID        REDACTED                        REDACTED
Application Name      db2bp                           db2jcc_application
Authentication ID     REDACTED                        REDACTED
Requesting AgentID    1728                            1385
Coordinating AgentID  1728                            1385
Agent Status          UOW Executing                   UOW Waiting
Application Action    No action                       No action
Lock timeout value    45                              0
Lock wait value       0                               0
Workload ID           1                               1
Workload Name         SYSDEFAULTUSERWORKLOAD          SYSDEFAULTUSERWORKLOAD
Service subclass ID   13                              13
Service subclass      SYSDEFAULTSUBCLASS              SYSDEFAULTSUBCLASS
Current Request       Execute Immediate               Execute
TEntry state          2                               2
TEntry flags1         00000000                        00000000
TEntry flags2         00000200                        00000200
Lock escalation       no                              no
Client userid
Client wrkstnname                                     REDACTED
Client applname
Client acctng

Current Activities of Participant No 1
----------------------------------------
Activity ID        : 1
Uow ID             : 14
Package Name       : SQLC2H21
Package Schema     : NULLID
Package Version    :
Package Token      : AAAAAPAa
Package Sectno     : 203
Reopt value        : none
Incremental Bind   : no
Eff isolation      : CS
Eff degree         : 0
Eff locktimeout    : 45
Stmt first use     : 2012-02-07-12.31.06.297485
Stmt last use      : 2012-02-07-12.31.06.297485
Stmt unicode       : no
Stmt query ID      : 0
Stmt nesting level : 0
Stmt invocation ID : 0
Stmt source ID     : 0
Stmt pkgcache ID   : 1069446856888
Stmt type          : Dynamic
Stmt operation     : DML, Insert/Update/Delete
Stmt text          : delete from attr where attr_id >= 7000000000000000001 and attr_id <= 7000000000000000020

To format the output to tables (in the 'DBA' schema for our example):

> db2 "call EVMON_FORMAT_UE_TO_TABLES ('LOCKING', NULL, NULL, NULL, 'DBA', NULL, NULL, -1, 'SELECT * FROM DBA.MY_LOCKS ORDER BY event_timestamp')"

  Return Status = 0

If you receive this error:

SQL0171N  The data type, length or value of the argument for the parameter in
position "3 (query) " of routine "XDB_DECOMP_XML_FROM_QUERY" is incorrect.
Parameter name: "".  SQLSTATE=42815

check to make sure that you’ve specified the right values for the type of event monitor, the schema where the tables should be created, and the table name the event monitor is writing to. Also check to make sure you have the appropriately sized system temporary tablespace, as an order-by on the un-formatted event monitor table will fail without it.
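
On a busy system, it can also be worth checking how much unformatted data you're about to process before calling the procedure – a sketch against the example table name:

db2 "select count(*) as ue_rows from DBA.MY_LOCKS with ur"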

Assuming this is the first time you’ve run EVMON_FORMAT_UE_TO_TABLES, you’ll see a number of new tables:

LOCK_ACTIVITY_VALUES
LOCK_EVENT
LOCK_PARTICIPANTS
LOCK_PARTICIPANT_ACTIVITIES

Some interesting SQL to help you navigate the output in tables

Please first note that this SQL is tested only for functionality and not for performance – I haven't run explains on it or done any optimization, so expect it to run poorly for large amounts of data.

First to list all of the locking events:

> db2 "select event_id, substr(event_type,1,18) as event_type, event_timestamp, dl_conns, rolled_back_participant_no from DBA.LOCK_EVENT order by event_id, event_timestamp with ur"

EVENT_ID             EVENT_TYPE         EVENT_TIMESTAMP            DL_CONNS    ROLLED_BACK_PARTICIPANT_NO
-------------------- ------------------ -------------------------- ----------- --------------------------
                   1 DEADLOCK           2012-01-23-15.36.56.036831           2                          2
                   1 DEADLOCK           2012-01-23-15.36.56.036831           2                          2
                   2 LOCKTIMEOUT        2012-01-23-15.43.06.875032           -                          -
                   2 LOCKTIMEOUT        2012-01-23-15.43.06.875032           -                          -

  4 record(s) selected.

To summarize counts

> db2 "select substr(event_type,1,18) as event_type, count(*) as count, sum(dl_conns) sum_involved_connections from DBA.LOCK_EVENT group by event_type with ur"

EVENT_TYPE         COUNT       SUM_INVOLVED_CONNECTIONS
------------------ ----------- ------------------------
DEADLOCK                     2                        4
LOCKTIMEOUT                  2                        -

  2 record(s) selected.

To summarize counts by hour

> db2 "select substr(event_type,1,18) as event_type, year(event_timestamp) as year, month(event_timestamp) as month, day(event_timestamp) as day, hour(event_timestamp) as hour, count(*) as count from DBA.LOCK_EVENT group by year(event_timestamp), month(event_timestamp), day(event_timestamp), hour(event_timestamp), event_type order by year(event_timestamp), month(event_timestamp), day(event_timestamp), hour(event_timestamp), event_type with ur"

EVENT_TYPE         YEAR        MONTH       DAY         HOUR        COUNT
------------------ ----------- ----------- ----------- ----------- -----------
DEADLOCK                  2012           1          23          15           2
LOCKTIMEOUT               2012           1          23          15           2

  2 record(s) selected.

To summarize by table and lock event type:

> db2 "select substr(lp.table_schema,1,18) as table_schema, substr(lp.table_name,1,30) as table_name, substr(le.event_type,1,18) as lock_event, count(*)/2 as count from DBA.LOCK_PARTICIPANTS lp, DBA.LOCK_EVENT le where lp.xmlid=le.xmlid group by lp.table_schema, lp.table_name, le.event_type order by lp.table_schema, lp.table_name, le.event_type with ur"
TABLE_SCHEMA       TABLE_NAME                     LOCK_EVENT         COUNT
------------------ ------------------------------ ------------------ -----------
DB2INST1           INVENTORY                      DEADLOCK                     2
DB2INST1           PURCHASEORDER                  DEADLOCK                     2
DB2INST1           PURCHASEORDER                  LOCKTIMEOUT                  2
-                  -                              LOCKTIMEOUT                  2

  4 record(s) selected.

If you want to only look at deadlocks or lock timeouts, it is easy to add a where clause on event_type to these queries.
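
For example, restricting the by-hour summary to deadlocks only looks like this (the event_type values, as shown in the output above, are DEADLOCK and LOCKTIMEOUT):

> db2 "select substr(event_type,1,18) as event_type, year(event_timestamp) as year, month(event_timestamp) as month, day(event_timestamp) as day, hour(event_timestamp) as hour, count(*) as count from DBA.LOCK_EVENT where event_type='DEADLOCK' group by year(event_timestamp), month(event_timestamp), day(event_timestamp), hour(event_timestamp), event_type order by year(event_timestamp), month(event_timestamp), day(event_timestamp), hour(event_timestamp), event_type with ur"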

To summarize by statement:

> db2 "with t1 as (select STMT_PKGCACHE_ID as STMT_PKGCACHE_ID, count(*) as stmt_count from dba.lock_participant_activities group by STMT_PKGCACHE_ID) select t1.stmt_count, (select substr(STMT_TEXT,1,100) as stmt_text from dba.lock_participant_activities a1 where a1.STMT_PKGCACHE_ID=t1.STMT_PKGCACHE_ID fetch first 1 row only) from t1 order by t1.stmt_count desc with ur"

STMT_COUNT  STMT_TEXT
----------- ----------------------------------------------------------------------------------------------------
          4 update db2inst1.purchaseorder set ORDERDATE=current timestamp - 7 days where POID='5000'
          2 update db2inst1.INVENTORY set QUANTITY= QUANTITY+5 where PID='100-101-01'

  2 record(s) selected.

The substr on stmt_text in the above statement is included for readability only – I would recommend removing that substr when actually using this SQL.

If you want to do the same thing, counting only statements involved in deadlocks, try this:

> db2 "with t1 as (select STMT_PKGCACHE_ID as STMT_PKGCACHE_ID, count(*) as stmt_count from dba.lock_participant_activities where XMLID like '%DEADLOCK%' group by STMT_PKGCACHE_ID) select t1.stmt_count, (select substr(STMT_TEXT,1,100) as stmt_text from dba.lock_participant_activities a1 where a1.STMT_PKGCACHE_ID=t1.STMT_PKGCACHE_ID fetch first 1 row only) from t1 with ur"

STMT_COUNT  STMT_TEXT
----------- ----------------------------------------------------------------------------------------------------
          2 update db2inst1.purchaseorder set ORDERDATE=current timestamp - 7 days where POID='5000'
          2 update db2inst1.INVENTORY set QUANTITY= QUANTITY+5 where PID='100-101-01'

  2 record(s) selected.

References

Statement on deprecation of the detaildeadlock event monitor: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.wn.doc/doc/i0054715.html

Syntax for creating lock event monitor: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0054074.html

Syntax for EVMON_FORMAT_UE_TO_TABLES: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.sql.rtn.doc/doc/r0054910.html

Reference on tables data is written to: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.mon.doc/doc/r0055559.html

Good article on this topic on DW: http://www.ibm.com/developerworks/data/library/techarticle/dm-1004lockeventmonitor/ – this also has descriptions of how to artificially create a deadlock on the sample database for testing purposes, but doesn't include SQL to parse the output if you write the formatted data out to tables.

How to use the DB2 Governor to force off long idle connections

OK, so the DB2 Governor is deprecated in 9.7. But its only replacement is a pay-for-use tool – the workload manager. So I imagine I'll be writing a script to do the very basic things that I do with the DB2 Governor when it's gone.

The DB2 Governor has a lot of purposes. It can be used to change the priority or limit connections based on a variety of criteria, but the way I use it is very basic. The main way I use it is to prevent log file saturation caused by idle connections. For a definition of log file saturation if you’re not familiar with it, read the bottom of this post: Managing db2 transaction log files.

I use the DB2 Governor to force off connections that have been idle for more than 4 hours. I’ve seen a number of cases where a hung connection causes log file saturation by holding on to an older log file. In many cases, these connections are random one-off connections that are not intentionally idle – either a developer has left a connection open or massload or some other tool has failed without releasing all resources.

Most WebSphere Commerce connections have some activity every 10 minutes or more frequently. If you’re running some other app, you’d obviously have to consider what timing is appropriate for you. Some applications may actually require a connection to the database with a lot of idle time.

Creating the governor config file

The config file is pretty simple:

1  { Wake up once every three minutes, database name is sample }
2  interval 180; dbname sample;
3
4  desc "Force off java applications idle for more than 4 hours"
5  applname java,java.exe
6  setlimit idle 14400
7  action force;

The leading numbers are added for convenience here, and are not part of the file.

Line 1 is a comment line. Anything in curly brackets is a comment, and at least one line of comment is nice.

Line 2 simply sets the wake-up interval and database name. These clauses can be specified only once per file, so only one database and interval may be specified.

Line 4 is simply a descriptive line.

Line 5 sets the application names that will be affected. In my case, I only want to affect java connections, not command-line or other connections.

Line 6 sets a limit for the idle time of 14,400 seconds or 4 hours.

Line 7 indicates that when the limit above is encountered, the connection in question will be forced off of the database.
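
The same config file can hold additional rules. For example, a catch-all rule that forces off any connection – regardless of application name – after 24 hours of idle time might look like the following. This is a sketch only; adjust the limit and test it against your own workload before relying on it:

{ Catch-all: force off anything idle for more than 24 hours }
desc "Force off any application idle for more than 24 hours"
setlimit idle 86400
action force;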

Starting the governor

With the above in a file named whatever you like and in whatever location you like, you can then start the governor using this syntax:

db2gov start sample dbpartitionnum 0 /fully/qualified/path/db2gov.sample.cfg db2gov.sample.log

The database name in this example is 'sample'. The dbpartitionnum clause is required even on single-partition databases to prevent some odd error messages. The log file is created in the instance home directory under sqllib/logs.

If you get this error:

GOV1007N Governor already flagged as running

Then you must stop the governor before the start will succeed.

Keeping the governor up

There is no autostart function for the governor that I am aware of. That leaves us to handle starting the governor after a db2start or reboot, and restarting it if it should happen to crash. My preferred solution is to run a simple script every 15 minutes or so that checks whether the governor is up and starts it if not. Such a script also has to either handle the GOV1007N error or always stop the governor before starting it.
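
Here is a minimal sketch of such a script, assuming the governor daemon shows up in the process list as db2govd, and reusing the database name, config file, and log file from the start example above – adjust names and paths for your environment and run it as the instance owner:

#!/bin/sh
# Sketch: restart the DB2 governor if its daemon is not running.
DB=sample
CFG=/fully/qualified/path/db2gov.sample.cfg
LOG=db2gov.sample.log

if ! ps -ef | grep db2govd | grep -v grep > /dev/null 2>&1; then
   # Stop first so a stale GOV1007N "already flagged as running" state does not block the start.
   db2gov stop $DB dbpartitionnum 0 > /dev/null 2>&1
   db2gov start $DB dbpartitionnum 0 $CFG $LOG
fi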

Stopping the governor

Stopping the governor is quite simple:

db2gov stop sample dbpartitionnum 0

The only details we have to specify are the database name and the same dbpartitionnum clause.

Introducing Parameter Wednesday – DBM CFG: NUMDB

This is a new blog post format I’m introducing. I’m declaring Wednesday Parameter Day. That means each Wednesday, I’ll pick a parameter and cover it in excruciating detail. Some of the details will come straight out of the info center, but I’ll add my own experiences and insight geared towards e-commerce databases and throw in specifics for WebSphere Commerce from time to time. I’m selecting my own order – they’re not necessarily going to be in the order of where they are listed or alphabetical or even of impact or anything. That also means if you want more details on a parameter, comment or email me and I’ll generally be glad to slip it in. I’m starting with a relatively simple one to get the format worked out.

DB2 Version This Was Written For

9.7

Parameter Name

NUMDB

Where This Parameter Lives

Database Manager Configuration

Description

Defines the maximum number of databases that can be concurrently active

Impact

If this is set too low, you will get an error. It may impact how memory is allocated, so it shouldn't be set too high.

Default

8 (UNIX/Linux)

8 (Windows server with local and remote clients)

3 (Windows server with local clients)

Range

1-256

Recycle Required To Take Effect?

DB2 instance recycle is required for this to take effect.

Can It Be Set To AUTOMATIC?

No, this cannot be set to AUTOMATIC

How To Change It

db2 update dbm cfg using NUMDB N

where N is the number you wish to set NUMDB to

Rule of Thumb

Leave this at the default unless you specifically know you will have more than that number of concurrently active databases.

Tuning Considerations

Since an error message is returned when you need to change this parameter, there isn’t a lot of tuning to be done.

Related Error Messages

SQL1041N The maximum number of concurrent databases have already been
started. SQLSTATE=57032

This error indicates that NUMDB needs to be higher.
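
To see where you stand before changing it, something like this shows the current setting and how many databases are actually active:

db2 get dbm cfg | grep -i numdb
db2 list active databases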

War Stories From The Real World

This is generally a pretty boring parameter. I’ve seen as many as 14 databases on a single instance. I would generally aim for having fewer than 8 databases on an instance anyway. I’m a big fan of the one database on one instance approach unless we’re talking about small databases like configuration databases or ESB databases. I’ve certainly had to increase the parameter, but there’s nothing complicated about doing that.

Link To Info Center

http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.config.doc/doc/r0000278.html

Related Parameters

The info center lists all the parameters that control memory allocated on a per-database basis. See the info center link above to see those links.

What DBAs can do to Reduce Deadlocks

Deadlocking is an application problem. There are only a few things that DBAs can do to reduce deadlocking, and they all require buy-in from the application. Let me repeat that another way. Don’t set the parameters mentioned here without understanding the impact on your application.

Currently Committed

This is new behavior in DB2 9.7. It has a similar effect to Oracle’s locking methodology in that “Readers don’t block writers and writers don’t block readers”. If you’ve created your database on 9.7, it is on by default. If you upgrade to 9.7, you can turn it on like this:

db2 update db cfg for <dbname> using CUR_COMMIT ON

It is a database configuration parameter, so you’d have to look at it for each database if you have more than one on an instance.

One thing I like about this is that it can be set separately from DB2_COMPATIBILITY_VECTOR. For applications like WebSphere Commerce that don’t support DB2_COMPATIBILITY_VECTOR, this is nice.

Because this reduces locking overall, it also reduces deadlocking. It also changes how locks are acquired, so your application should explicitly support this setting. WebSphere Commerce 7 supports it. WebSphere Commerce 6 does not support it.
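
To check whether a given database already has it enabled, something like this works (assuming a database named sample):

db2 get db cfg for sample | grep -i cur_commit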

Three DB2 Registry Parameters

DB2_EVALUNCOMMITTED

This DB2 registry parameter is one of three that changes how db2 locks rows. As such it is dangerous and should only be used if your application explicitly supports its use. WebSphere Commerce supports the use of all three. Be careful of DB2 instances where you have more than one database – this is set at the instance level, and you’ll want to make sure that all applications accessing any database in the instance support these parameters.

This DB2 registry parameter allows db2 to evaluate rows to see if they meet the conditions of the query BEFORE locking the row when using RS or CS isolation levels. The normal behavior for these isolation levels would be to lock the row before determining if it matched.

To check your current value:

$ db2set -all |grep DB2_EVALUNCOMMITTED
[i] DB2_EVALUNCOMMITTED=YES

If nothing is returned from this command, then the parameter is not set. The entire db2 instance must be recycled (db2stop/db2start) for changes to this parameter to take effect.

DB2_EVALUNCOMMITTED info center entry

DB2_SKIPDELETED

This DB2 registry parameter is one of three that changes how db2 locks rows. As such it is dangerous and should only be used if your application explicitly supports its use. WebSphere Commerce supports the use of all three. Be careful of DB2 instances where you have more than one database – this is set at the instance level, and you’ll want to make sure that all applications accessing any database in the instance support these parameters.

This DB2 registry parameter allows DB2 to skip uncommitted deleted rows during index scans. If it is not set, DB2 still has to deal with uncommitted deleted rows during index scans – the normal behavior is for DB2 to wait on uncommitted deleted keys in indexes until the deletes are committed or rolled back.

To check your current value:

$ db2set -all |grep DB2_SKIPDELETED
[i] DB2_SKIPDELETED=ON

If nothing is returned from this command, then the parameter is not set. The entire db2 instance must be recycled (db2stop/db2start) for changes to this parameter to take effect.

DB2_SKIPDELETED info center entry

DB2_SKIPINSERTED

This DB2 registry parameter is one of three that changes how db2 locks rows. As such it is dangerous and should only be used if your application explicitly supports its use. WebSphere Commerce supports the use of all three. Be careful of DB2 instances where you have more than one database – this is set at the instance level, and you’ll want to make sure that all applications accessing any database in the instance support these parameters.

This DB2 registry parameter allows DB2 to skip uncommitted newly inserted rows. If this parameter is not set, DB2 waits for the inserts to be committed or rolled back before continuing – you can see how this might not be ideal for a database that requires high concurrency. Like the others, this applies to CS and RS isolation levels.

To check your current value:

$ db2set -all |grep DB2_SKIPINSERTED
[i] DB2_SKIPINSERTED=ON

DB2_SKIPINSERTED info center entry

Increasing LOCKLIST only helps if you’re seeing lock escalations

I put that whole sentence in a heading because I very frequently run into someone who wants me to change LOCKLIST to deal with a deadlocking issue. Increasing LOCKLIST will only help if you’re actually seeing lock escalations. Lock escalations are noted both in the db2diag.log and in counters in the database snapshot.
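
A quick way to check both places – a sketch, assuming the default diagnostic path under the instance home and a database named sample:

grep -i escalat ~/sqllib/db2dump/db2diag.log
db2 "get snapshot for database on sample" | grep -i escal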

What won’t help

Changing the LOCKTIMEOUT database configuration parameter will only help if you have it set unreasonably high (higher than 90 seconds for an e-commerce database), and then only in deadlocks that are side-effects of long lock waits.

Changing DLCHKTIME will not help reduce deadlocking.

Deadlocking is an application or database design problem

Deadlocking is an application problem that manifests in the database. Even if it isn’t a database problem, DBAs frequently help developers troubleshoot issues. See my blog entries on analyzing deadlocks to get an idea of how to do this.

Analyzing Deadlocks – the old way

Analyzing Deadlocks – the new way
