
Tuesday, July 28, 2015

Windows 10 and E-Business Suite

Tomorrow (July 29, 2015) we will see the release of Windows 10 into the wild.  PCs all over the world that aren't under "enterprise control" have been signing up to download the upgrade through Microsoft's "Windows Update" delivery mechanism, and it is "going to drop" tomorrow.

For us (E-Business Suite DBAs) there should be at least some level of concern.

Because Microsoft is using this mechanism, this will likely be the largest mass deployment of software anyone has witnessed, and I would expect Windows 10's uptake to significantly outpace any other Windows roll-out in history.

Remember, with previous Windows upgrades, you either had to go out and purchase the software or you got it when you acquired a new laptop.  This meant that, with few exceptions, those of us who have to deal with these changes at least had some breathing room before we really felt it.

Beginning tomorrow, however, anyone with a licensed copy of Windows 7 (SP1), Windows 8, or Windows 8.1 could potentially upgrade to Windows 10.  While this really isn't a concern for most corporate PCs (where software updates/upgrades are managed by the centralized IT department), if you're on a project where users and/or consultants have "unmanaged" PCs, you could encounter some questions.

The first thing you need to know is that, of course, nothing about Windows 10 has been officially "certified" by anyone at Oracle yet.  So, you could always stand behind that statement.  And, certainly, if your IT department is even considering rolling out Windows 10 to anyone, they should wait until that certification information is released.

Now, for those of you who are just wondering, does it even work?  The answer is, yes, it appears to... but there are a few things you should know.

First, Windows 10 ships with a new minimalistic browser called "Microsoft Edge" (formerly code-named "Project Spartan").  The browser works pretty well and the interface is clean, which is nice.  But Microsoft Edge doesn't support plug-ins (specifically, it no longer supports ActiveX).  This means that you will be unable to launch Java from the Edge browser which, in turn, means that you won't be able to launch Oracle Forms from within E-Business Suite.

But, never fear.  Windows 10 also ships with Internet Explorer 11, which is certified in many E-Business Suite configurations.  I performed some rather limited testing (log into R12.1.3, launch forms, basic navigation) using a recent pre-release version of Windows 10 (x86-64, build 10301) and Java JRE 1.8.0_51 (32-bit) and everything appears to function without much issue.  Obviously, this was far from a complete test and I wouldn't go into production with it.  Fortunately, since Oracle has already certified Internet Explorer 11 on Windows 7 and 8.1 (notably, not Windows 8) with EBS, I doubt that certification for IE11+EBS+Windows 10 will find too many problems.



So, the long and short of it is: should you encounter that Windows 10 early adopter, they should have some luck using Internet Explorer 11 (assuming that you're patched up to support it per MOS Note 389422.1, of course).  Should the user be technically inclined and still want to remain an early adopter, I strongly recommend running an older version of Windows (Windows XP or Windows 7) in a VirtualBox VM.  It's a great way to be current and still be able to use some of the really old tools.  (Workflow Builder, anyone?)


Wednesday, June 26, 2013

Oracle Database 12c Is Available for Download

File this under "it's about time" and "ICYMI" (In Case You Missed It), but Oracle has released Database 12c (12.1.0.1.0). Downloads can be found on their TechNet and E-Delivery sites. At this point, the only available versions are for Linux (x86-64), Solaris (Sparc64), and Solaris (x86-64). Other platforms will surely follow.

Not officially released... yet

According to media reports (and my inability to find an actual press release from Oracle), the formal launch of Database 12c will occur "within a couple of weeks".

Differences between TechNet and E-Delivery

While, ostensibly, it may be the same software, there is always the possibility that you'll get slightly different versions. The software that you download from TechNet is usually in the form of either a zip file or a "tarball" of the staged installation. The downloads from E-Delivery are also zip files, but they represent the actual media packs (CD or DVD). For some reason, Oracle doesn't do ISOs but, nevertheless, the E-Delivery downloads are typically viewed as more "supported". As a result, I recommend using the E-Delivery downloads rather than TechNet if you're planning on doing anything that is going to need to be handled under a support contract.

Naturally, for either method, you will have to agree to license terms and export conditions. If you have never used E-Delivery from your Oracle account, there might be a slight delay as your account is verified by Oracle.

As with all new software, be sure to test thoroughly and make sure any applications are certified with 12c before deploying to production.

Oracle Client 12c is also available

The Oracle 12c Client can also be downloaded for the following platforms: Linux (x86-32), Linux (x86-64), Microsoft Windows (x86-32), Microsoft Windows (x86-64), Solaris (Sparc 64), Solaris (Sparc 32), Solaris (x86-32), Solaris (x86-64).

NOT CERTIFIED WITH E-BUSINESS SUITE

Since this blog is focused on E-Business Suite (and E-Business Suite is what I do), I feel the need to state that Database 12c is NOT certified with ANY RELEASE of E-Business Suite at this point. I suspect that we'll see it certified against 12.1.3 and the upcoming 12.2 at some point in the future (maybe 12.2 on release). It is highly unlikely (in my opinion) to be certified against any release of 11i. In the event that it is certified against 11i, you can bet that it will be a pretty low priority item.

You can find them available here:

Oracle E-Delivery: https://edelivery.oracle.com


-- James

Tuesday, February 5, 2013

Deciphering support and licensing issues surrounding Oracle on VMWare


I frequently run into clients who want to run Oracle products in their VMWare cluster. Since I generally deal with E-Business Suite customers, I tend to take an "anything that swallows machines whole should probably have a physical machine" approach to production systems. However, I can see some distinct advantages to virtualization, particularly when it comes to managing large numbers of non-production environments.

Unfortunately, there is a lot of confusion out there as it relates to Oracle and virtualization... particularly surrounding one of the most popular virtualization solutions, VMWare. I'll try to provide my best understanding of the issues here.

Are Oracle products certified on VMWare?

The short answer is, NO. But that really shouldn't be much of a concern. Keep in mind that a VMWare virtual machine is, technically, hardware. Oracle doesn't tend to certify against hardware. And that's what a VMWare VM really is: "virtual hardware". As such, it's really no different than a particular model of Dell or HP ProLiant server.

What Oracle does do is certify against a platform. A platform is the combination of a particular version of an operating system (Solaris 10 vs. Solaris 11, for example) and processor architecture (Sun SPARC vs. Intel x86-32 or Intel x86-64). In the case of a deployment to VMWare, your platform will be determined by the operating system that you intend to run inside of the virtual machine. (For example, Red Hat Enterprise Linux 4/5/6 for x86 or x86-64).

Are Oracle products supported on VMWare?

Oracle's official support position can be found in MOS Note 249212.1, copied below (emphasis mine):

Support Position for Oracle Products Running on VMWare Virtualized Environments [ID 249212.1]

Purpose
---------
Explain to customers how Oracle supports our products when running on VMware

Scope & Application
----------------------
For Customers running Oracle products on VMware virtualized environments. No limitation on use or distribution.


Support Status for VMware Virtualized Environments
--------------------------------------------------
Oracle has not certified any of its products on VMware virtualized environments. Oracle Support will assist customers running Oracle products on VMware in the following manner: Oracle will only provide support for issues that either are known to occur on the native OS, or can be demonstrated not to be as a result of running on VMware.

If a problem is a known Oracle issue, Oracle support will recommend the appropriate solution on the native OS. If that solution does not work in the VMware virtualized environment, the customer will be referred to VMware for support. When the customer can demonstrate that the Oracle solution does not work when running on the native OS, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.

If the problem is determined not to be a known Oracle issue, we will refer the customer to VMware for support. When the customer can demonstrate that the issue occurs when running on the native OS, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.

NOTE: Oracle has not certified any of its products on VMware. For Oracle RAC, Oracle will only accept Service Requests as described in this note on Oracle RAC 11.2.0.2 and later releases.

In my understanding of the actual way that the policy is applied, it's really a matter of whether or not the support engineer suspects VMWare to be the culprit. What I'm saying here is that, generally speaking, the support engineer will work your issue the same way that he/she would if you were on physical hardware. However, once that engineer thinks that VMWare could be the cause of your problem, they reserve the right to "punt" and say "call us back once you've reproduced it on physical hardware".

Now, VMWare, to their credit, has a policy that they call "Total Ownership", where they will accept accountability for any Oracle-related issues. You can read their official policy at the link below.


It is my understanding that, as part of the "Total Ownership" policy, VMware will reproduce the problem on physical hardware for the customer if Oracle decides that VMWare is the problem.

What about Licensing?

Part of the big problem I've always had with Oracle on VMWare is caused by Oracle's per-CPU licensing policy. My original understanding was that, if you have a total of 64 cores in your VMWare cluster, it didn't matter if you were only using 8 cores for Oracle. Oracle would tell you that you had to pay for 64 cores. The idea behind this is that you could, potentially, resize the virtual machine to suit certain needs. Maybe you need more horsepower during month end?

What I've since learned is that Oracle has a policy document (below) that talks about "soft" vs. "hard" partitioning.


What I've described above would fall under the concept of "soft partitioning". However, "hard partitioning" methodologies allow for a different approach. VMWare has (naturally) a nice document that explains their approach to implementing clusters that are in compliance with Oracle's licensing requirements.


From that document, pay particular attention to section 2.2. In that section (specifically Scenario B), they discuss DRS Host Affinity rules and VMWare CPU pinning. (emphasis mine)

2.2 Clusters: Fully Licensed Versus Partially Licensed Clusters

Scenario B: Partially Licensed Clusters

When a customer does not have enough Oracle application instances to justify creating a dedicated cluster for those applications, only a subset of the hosts in the cluster are licensed for the Oracle application. In this situation, the customer must be careful to restrict the movement of Oracle application instances and virtual machines to only those hosts that are licensed to run the product.

In this case, DRS Host Affinity rules can be used to appropriately restrict the movement of virtual machines within the cluster. DRS Host Affinity is a vSphere feature that enables you to ensure that your Oracle applications are restricted to move only between a subset of the hosts—that is, not all hardware in the cluster is “available” to the Oracle software. DRS Host Affinity is a clustering technology and is not a mechanism for soft or hard partitioning of the servers. As explained in section 2.1, using VMware CPU pinning to partially license a host is not currently recognized by Oracle as a “hard partitioning” mechanism that receives subsystem pricing. However, once you have fully licensed the host, you have the right to design your environment such that the Oracle workloads are free to run on the licensed hosts inside the cluster. At present, Oracle does not have any stated policy regarding clustering mechanisms or DRS Host Affinity. Customers can easily maiatain records for compliance purposes as explained in section 2.3.

In this case, DRS Host Affinity rules can be used to appropriately restrict the movement of virtual machines within the cluster. DRS Host Affinity is a vSphere feature that enables you to ensure that your Oracle applications are restricted to move only between a subset of the hosts—that is, not all hardware in the cluster is “available” to the Oracle software. DRS Host Affinity is a clustering technology and is not a mechanism for soft or hard partitioning of the servers. As explained in section 2.1, using VMware CPU pinning to partially license a host is not currently recognized by Oracle as a “hard partitioning” mechanism that receives subsystem pricing. However, once you have fully licensed the host, you have the right to design your environment such that the Oracle workloads are free to run on the licensed hosts inside the cluster. At present, Oracle does not have any stated policy regarding clustering mechanisms or DRS Host Affinity. Customers can easily maintain records for compliance purposes as explained in section 2.3.
The advantages of this approach are similar to the advantages achieved with a fully licensed cluster. Because customers are typically able to increase the utilization of licensed processors, they reduce license requirements. However, consolidation ratios tend to be lower, because advanced vSphere features can be employed only on a smaller subset of the hosts.

VMWare CPU pinning is a feature that (in my understanding) would allow you to say that a given VM would only use certain cores in a physical host. So, if you have a single host with 16 cores, you can "pin" a given VM to four of them. According to Oracle's partitioning document (and VMWare's document), you would still be required to pay for all 16 cores in the box. The basic logic here is that Oracle's licensing policy is based on the number of cores in a physical server. You can't license part of a box. Period. No exceptions.

On the other hand, DRS Host Affinity is a way to pin a virtual machine to a given host (or collection of hosts) within a cluster. So, let's say that you have ten (10) 8-core physical hosts (a total of 80 cores) in your VMWare cluster. Using DRS Host Affinity, you could restrict your Oracle VMs to a subset of those physical hosts. For example, if you restricted your Oracle VMs to only five (5) of those physical hosts, VMWare's contention is that you would only have to license 40 cores.

I should probably include the standard "IANAL" (I am not a lawyer) disclaimer. I'm also not a VMWare administrator. What I am is a DBA and an IT Geek. That's pretty much the limit of it.

Hopefully this provides some clarity on the issue.

For further reading on the subject, here are a couple of blog links that I used in my research:


James

Why I don't depend on TOAD (or OEM) and neither should you.


My apologies in advance, as this posting may sound like something of a rant.

The first thing I'd like to point out is that I have no real problem with TOAD, Oracle Enterprise Manager, or Windows-based editors. They are all excellent tools that can be extremely helpful in your environment. My objection to these tools is based solely on a lowest-common-denominator argument.

First, a little background. Back in the early 1990's, I was working as a Unix Systems Administrator for a company in Kansas City, MO. Since then, I've worked mainly as a consultant.

Shortly before I started that job in Kansas City, the company had hired a new CIO who let go about half of the legacy (mainframe, COBOL) IT department. The new direction for the company was implementation of Oracle E-Business Suite on Data General Unix (DG/UX).

The mainframe IT staff that survived were being re-trained in the new technology. At one point, several of them came to me insisting that I install ISPF (an editor they were used to on the mainframe) onto the DG/UX boxes because they were struggling to learn to use the vi editor. I informed them that they (as a group) might carry enough weight to convince the CIO to direct me to install it (assuming it was even available). However, when they went to their next job and claimed that "they know Unix", they would be alone and wouldn't have that leverage.  My suggestion was that I would help them learn the vi editor. (I did offer emacs as an alternative, since it is and was extremely common on Unix systems... Unfortunately, friendlier editors like pico, nano, and joe didn't exist yet.)

If your primary job is software development, a tool like TOAD is generally something you can depend on having. However, as a DBA, you can't necessarily depend on having TOAD (or even Oracle Enterprise Manager) at your disposal at all times. Maybe you're starting a new job and the previous DBA hadn't set up Enterprise Manager (or you haven't gotten around to it yet). Even in environments where those tools are available, they may or may not be working when you need them.

So, my advice? There are certain tools that are almost ALWAYS there. Get comfortable with ssh, SQL*Plus, and vi (or vim).  They are your friends.
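
If you want a feel for how far that bare toolkit can go, here's a minimal sketch of a remote database check using nothing but ssh and SQL*Plus. The hostname, user, and SID are hypothetical, and it assumes oraenv is in the remote PATH:

ssh -T oracle@proddb.mydomain <<'EOF'
# Everything below runs on the remote host. ORAENV_ASK=NO suppresses
# oraenv's prompt for non-interactive use.
export ORACLE_SID=PROD ORAENV_ASK=NO
. oraenv >/dev/null 2>&1
sqlplus -s / as sysdba <<SQL
select instance_name, status from v\$instance;
SQL
EOF

No TOAD, no OEM, and it works the same way on just about any box you can reach over ssh.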

James

Friday, June 8, 2012

On DBAs, Developers, and Performance


Cary Millsap has an excellent (as usual) blog posting today about the software development process employed by so many organizations.

You can read his full posting here:


This plays into one thing that I see quite a bit as a DBA at client sites. Most developers that I encounter don't tend to focus much on performance (unless it is painful during their testing). This isn't, specifically, their fault. In many cases, the developers are under significant pressure to "make it work" and move on to the next thing. As a result, they don't really have the time to worry about performance. Besides, that's the DBA's job, isn't it?

Well, actually, it isn't. See, here's the thing, from a DBA perspective, we have a relatively small handful of tools at our disposal. From the bottom-up, here is a basic list of things that a DBA generally does to impact the performance of a system:

Properly configure disk and I/O. This is (or should be) really a collaboration between the DBA, the Systems Administrators, and the Storage Administrators. Done correctly, this shouldn't really be a problem. However, as with everything, it is still possible to screw it up. Make sure that you have redundant I/O paths that have sufficient bandwidth for your system. Spread your I/O across many physical drives. With RAID technologies (particularly RAID 1/0) this is VERY easy to accomplish. Forget about the old concept of the DBA moving datafiles around to avoid a "hot disk". The disk array can do this at a much more granular and consistent level than any human possibly can. Your primary goal here should be to have as many spindles involved in every I/O operation as possible.

Properly size your hardware. Another collaboration between the DBA and Systems Administrator. Make sure you have adequate horsepower and RAM. And ALWAYS plan for growth! My general recommendation is always to put more RAM into the box than you think you'll need. Given that so many systems that I encounter these days are x86-based Linux systems (rather than "big iron" like Solaris or AIX), memory for these systems is a relatively small portion of their cost. Also, RAM doesn't impact your Oracle licensing!

Properly tune your OS Kernel and Database parameters. I think that this is one area where developers, managers, and users tend to have gross misconceptions. While it's true that tuning a system to truly "optimal" performance is a dark art, the truth is that, unless you've really screwed something up (sized your SGA too small, too few buffers, etc.), odds are you're not going to see huge performance gains by tweaking these parameters. In most cases, "decent performance" is fairly easy to achieve. Now, squeezing out that extra 20%? That can require in-depth analysis of statistics, utilization, I/O patterns, etc. This is where the "dark art" comes into play. And, honestly, this requires time to observe and adjust.
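
To put some meat on that: the "really screwed something up" cases are usually quick to spot. Here's a minimal sanity check from SQL*Plus (a sketch, assuming you can connect as SYSDBA on the box; parameter names per 10g/11g):

sqlplus -s "/ as sysdba" <<EOF
show parameter sga_target
show parameter pga_aggregate_target
show parameter db_cache_size
EOF

If those come back tiny relative to the RAM in the server, you've found your low-hanging fruit. Past that, you're into the "dark art" territory described above.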

Unfortunately, too many developers, managers, and even users, seem to wonder why that idiot DBA didn't just set the magic "MAKE_SQL_RUN_FASTER=TRUE" parameter. (Shh! Don't tell anyone, that's OUR little secret!)

The truth is, unless something is wrong at these lower levels, the biggest performance gains are going to come from tuning the code. And, in my opinion, since it's the developer's job to produce a quality product, it's ultimately the developer's job to tune their code.  Unfortunately, as Cary points out, too many organizations are structured in a manner that breaks the feedback loop required for developers to do this properly.

That MUST be fixed.

James

Friday, March 30, 2012

Stupid Unix Tricks... Part Two (Remote Command Execution using SSH)



So, let's say that you wanted to have a script on your dbTier that will reach out to your appsTier and shut down the applications. Maybe this is your system-level shutdown script so that when the Unix administrator shuts down the dbTier, everything is shut down nice and neat like...

For the purpose of this exercise, we're going to need to assume that the APPS password is known to the script (how you do that might be the subject of another blog posting). We're also going to assume that the Unix environment is set automatically (and without prompting) on the remote system.

So, how do you do it?

Well, first you have to set up ssh pre-shared keys. This will allow you to login without being asked for a password. (See my earlier posting: Password-less Login Using SSH Pre-Shared Keys)

Once that is configured, you can use a command like this:

ssh applmgr@myappstier.mydomain "cd ${ADMIN_SCRIPTS_HOME};./adstpall.sh apps/${APPSPW}" 2>&1 |tee -a ${LOG}

A few things to note here. First, you'll notice that I'm actually executing TWO commands remotely: the "cd" to change directories, and then the adstpall.sh script (the semicolon allows me to do that in Unix). Secondly, there are environment variables. Here's the thing about those environment variables: in the command above, they are NOT evaluated on the target system. They are evaluated locally on the SOURCE system. If you want to use variables that are local to the target, you're going to have to "escape" them.

For example, this one will use a variable evaluated on the source machine:

ssh applmgr@myappstier.mydomain "echo ${CONTEXT_NAME}"

And this one will use a variable evaluated on the target machine:

ssh applmgr@myappstier.mydomain 'echo ${CONTEXT_NAME}'

Similarly, you can evaluate a variable on the target by "escaping" it:

ssh applmgr@myappstier.mydomain "echo \${CONTEXT_NAME}"

At one client, their standard is to use a script that wraps around the standard "oraenv" to set their environment variables. As a result, every time they log in, they are greeted with a prompt asking them to choose their environment.

This raised an interesting problem for some of the automated processes we were trying to deploy. The automation was driven from a remote box and would need to ssh over to a target box and issue commands. So, how do we configure the environment so that a user logging in interactively is prompted and one issuing a command remotely through ssh isn't? Well, it turns out that, on Linux at least, that remote command doesn't get assigned a TTY. So, we've made a change to the .bash_profile on the target node that looks something like this:

if tty | fgrep pts ; then
#
# Normal, interactive logins
#
export ORAENV_ASK=YES
else
#
# Human-less logins (ssh "command")
# (Suppress output and bypass prompting for oracle environment)
#
export ORAENV_ASK=NO
fi
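
If you want to see the difference for yourself, run tty both ways (using the same hypothetical hostname as the earlier examples):

tty                                  # interactive login: prints something like /dev/pts/0
ssh applmgr@myappstier.mydomain tty  # remote command: prints "not a tty"

That "not a tty" case is exactly what the fgrep test above keys on.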

Now, let's assume you want to be a little more elaborate. You want to clean up extraneous output and capture the results of the command in your logfile (represented by the environment variable ${LOG}):

ssh applmgr@myappstier.mydomain ". ./.bash_profile 2>&1 1>/dev/null;cd ${ADMIN_SCRIPTS_HOME};./adstpall.sh apps/${APPSPW}" 2>&1 |tee -a ${LOG}

Or, maybe you'd like to do something in SQL*Plus on a remote system?

ssh applmgr@myappstier.mydomain ". ./.bash_profile 2>&1 1>/dev/null;sqlplus apps/${APPSPW}" <<EOF >>${LOG} 2>&1
select sysdate from dual;
EOF

This will redirect stderr to stdout, and send both to your logfile (${LOG}). Pay close attention to the line containing the EOF. It has to be the ONLY thing on the line (not even a trailing space!)

James

Wednesday, March 14, 2012

What drives E-Business Suite upgrades?



You'd think it would be new features, or security requirements. But, apparently, it's Oracle's end-of-support deadlines...


...at least according to 73% of the 327 OAUG members who responded to a survey.

UPDATE:  Here's the link to the full OAUG report:


James

Wednesday, March 7, 2012

Stupid Unix Tricks... Part 1

So, let's say you're trying to figure out if the database (or E-Business Suite) is down. Now, the logical way is to use the Unix commands ps and grep to check for a particular process. Generally speaking, we would look for the SMON process for that particular instance.

However, maybe you're looking for something else that has multiple processes and you want to see that they're all shut down.

We're going to use a database as an example (largely because I assume you are familiar with the database). The basic command would be:

ps -ef|grep ora_smon_PROD
oracle 10445 6643 0 15:32 pts/0 00:00:00 grep ora_smon_PROD
oracle 19710 1 0 Feb28 ? 00:00:36 ora_smon_PROD

However, the problem here is that it also gives our grep command. To get around that, we can strip it out using grep -v grep (which would strip from our results anything that contains the string grep). Additionally, maybe we want to get something we can use in an if statement. The simplest way to do that is to count the number of lines returned by the command. That can be done by piping the output through the wc -l command. Our final command will look like this:

ps -ef|grep ora_smon_PROD|grep -v grep |wc -l

So, assuming that we just wanted to look for SMON we can build our if statement like this:

if [ `ps -ef |grep ora_smon_PROD|grep -v grep |wc -l` -gt 0 ]; then
   echo "SMON is UP"
else
   echo "SMON is DOWN"
fi

Now, let's assume that you want to check for PMON instead:

if [ `ps -ef |grep ora_pmon_PROD|grep -v grep |wc -l` -gt 0 ]; then
   echo "PMON is UP"
else
   echo "PMON is DOWN"
fi

But what if you wanted to make sure that they were BOTH down?

if [ `ps -ef |grep -e ora_pmon_PROD -e ora_smon_PROD|grep -v grep |wc -l` -gt 0 ]; then
   echo "At least one of PMON/SMON is UP"
else
   echo "PMON and SMON are both DOWN"
fi

The key here is grep -e. Because grep allows you to use the -e flag more than once per invocation, you can specify multiple strings to search for. Multiple -e strings are treated as a logical "or" by grep when it's parsing the input.

As with everything, your results may vary. Different platforms may have different versions of grep with different capabilities. This example was tested on Linux.
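
One more shortcut worth knowing: on Linux (or anywhere a procps-style pgrep is installed), pgrep does the self-exclusion for you, so the grep -v grep dance goes away entirely. A sketch of the same "both down" check:

# pgrep -f matches against the full command line (as an extended regex)
# and never reports itself, so no "grep -v grep" is needed.
if pgrep -f "ora_(pmon|smon)_PROD" > /dev/null; then
   echo "At least one of PMON/SMON is UP"
else
   echo "PMON and SMON are both DOWN"
fi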

James

Thursday, February 16, 2012

Password-less Login Using SSH Pre-Shared Keys



Way back when I started working with Unix (otherwise known as "the olden days" or "days of yore"), one of the tricks we used was a concept known as "remote login" and the "Berkeley R commands". This was based on a number of things, most of them depending on either the /etc/hosts.equiv or the ${HOME}/.rhosts file to establish the trusting relationship. Configuring these would allow you the ability to do some really neat things. Among them, copying files from one host to another using a command like rcp /tmp/file user@remotehost:/tmp/file without being asked for a password. This made for some really neat scripting opportunities and made it much easier to manage multiple systems.

Unfortunately, the Berkeley "R" commands are notoriously insecure. The way that the trusting was done was based entirely on the username and hostname of the remote user on the remote host. Literally, you told the server to trust "jmorrow@remotehost.mydomain.com". The problem with this is that all that was required was knowledge of the trusting relationship. All you had to do was set up a machine named "remotehost.mydomain.com" and create a "jmorrow" user on it. Then you could go anywhere that that trusting relationship allowed.

Fortunately for us, the cool features that were introduced by the Berkeley "R" commands are implemented much more securely in the SSH protocol and toolset.

The SSH protocol can use pre-shared keys to establish trusting relationships. In this case, each node has both a public and a private key. When the client talks to the server, the client offers a "public key". The server, which maintains a list of trusted "public keys", then compares that key to its database to determine if it actually trusts the client. If the client passes the test, then it is allowed in without any further challenge. This can be very useful for administrators, for automated file transfer, and for scripting interactions between hosts. Note that this is not a "Machine A" trusts "Machine B" relationship. It is "user@machinea" trusts "user@machineb".

For the purposes of this article, the "server" is the node that you are logging into from the "client". So, the "server" is the one that is doing the trusting. The terms "server" and "client" refer only to the role being played by each component in the ssh communications session. I should also mention that Oracle Real Application Clusters (RAC) depends on this relationship as well.

Generate your public/private key pairs [Both Client and Server]

The server (user@host) needs to have one, and each client (user@host) that is being trusted needs to have one.

Execute these two commands (in a Unix/Linux environment) to create both your rsa and your dsa keys. You will be prompted for a location to store the files (typically under ${HOME}/.ssh), and for a passphrase. In all cases, it's ok to accept the defaults.

ssh-keygen -t rsa
ssh-keygen -t dsa

If you know you don't want to use a passphrase, you could generate the keys with these two commands:

ssh-keygen -t rsa -f ${HOME}/.ssh/id_rsa -N ""
ssh-keygen -t dsa -f ${HOME}/.ssh/id_dsa -N ""

Transfer the public key files from the client to the server

I prefer to make sure that I have a uniquely named copy of the public keys (makes it easier to transfer to another box when first establishing the relationship).

cd ${HOME}/.ssh
ls -1 id_[dr]sa.pub |while read LINE
do
cp ${LINE} ${LINE}.`whoami`@`hostname -s`
done

Now copy these files to the server:

scp id_[dr]sa.pub.`whoami`@`hostname -s` trustinguser@trustingserver:.ssh/.

Copy the public keys you're trusting into the authorized_keys file

Here, we'll need to append those keys to the authorized_keys file. Do this for each of the files that you transferred in the previous step (the filenames below follow the naming convention from the copy step):

cd ${HOME}/.ssh
cat id_rsa.pub.clientuser@clienthost >> authorized_keys
cat id_dsa.pub.clientuser@clienthost >> authorized_keys
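
(As an aside: on systems that ship the ssh-copy-id utility, which includes most modern Linux distributions, the transfer and append steps can be collapsed into a single command run from the client. You'll be prompted for the password one last time:

ssh-copy-id trustinguser@trustingserver

The manual steps above are still worth knowing for platforms that don't have it.)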

Make sure permissions are correct

If the permissions on these files are too open, the trusting relationship will not work. Here are my recommendations:

chmod 600 ${HOME}/.ssh/auth*
chmod 700 ${HOME}/.ssh
chmod 644 ${HOME}/.ssh/id_[dr]sa.pub*
chmod 600 ${HOME}/.ssh/id_[dr]sa

Now, you should be able to ssh from the client to the server without being prompted for a password.
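
A quick way to test the whole thing is to run a one-off remote command. If the keys are set up correctly, this comes back with no password prompt:

ssh trustinguser@trustingserver hostname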

James

Tuesday, February 7, 2012

Spreadsheet Risk (and why ad-hoc reporting tools make me twitchy)


First, let me say that I'm a DBA, not an accountant. We tend to trust databases to hold and organize data. We use applications and reports developed by professional developers to retrieve that data. Those applications and reports go through a software development lifecycle for a reason: to make sure that they are accurate.

Despite this, many professional developers aren't writing "well-tuned code". They're generally happy to get the right results and, as long as it isn't painfully slow, performance is either an afterthought or the DBA's problem. I've got news for you... some 80% of performance issues are caused by poorly tuned code!  

This is not to denigrate developers.  I'm saying this mostly to prove a point:  if you can't reliably expect well-tuned code from a professional developer, you're insane if you expect anything better from end-users with an ad-hoc query tool.

This is one reason why tools that allow end-users to produce their own reports (Discoverer, ADI, et al.) have always made me (and, I'm sure, other DBAs out there) somewhat nervous.

The other reason I've always been a little twitchy about those tools is accuracy. With professional developers, they understand the need for testing and accuracy. End-users, however, frequently don't have that same appreciation. So, when you allow end-users to develop their own queries and reports, or to extract and manipulate data in a spreadsheet, what kind of risks are you taking?

CIO Magazine has a thought-provoking article on this. Definitely worth a read.


James