Exadata Software is Released

This release brings good new features, such as Flash Cache and Flash Log statistics in AWR reports, an automatic ASM data redundancy check when shutting down a storage server (whether by pressing the power button or through ILOM), preservation of the Flash Cache population during cell-to-cell rebalance, the ability to disable SSH on storage servers, and running CellCLI commands from compute nodes via the new ExaCLI utility.

Find details in the MOS doc: Exadata release and patch (20131726) (Doc ID 2038073.1)
  • Oracle Exadata Database Machine X5-8 Support
    • New Exadata X5-8 Database Server
      • Exadata X5-8 updates the 8-socket database server to use the latest and fastest Intel Xeon E7-8895 v3 “Haswell-EX” processors with 18 cores (vs. 15 cores in X4-8) for 20% greater performance. The HBA no longer depends on battery to retain cached data, hence eliminating the need for preventive maintenance.
  • IPv6 Support
    • Compute nodes and storage servers are now enabled to use IPv6 for the management network, ILOM, and the client access network. This works for both bare metal and virtualized deployments.
  • Disabling SSH on Storage Servers
    • By default, SSH is enabled on storage servers. If required, you can "lock" the storage servers to disable SSH access. You can still perform operations on the cell using ExaCLI, which runs on compute nodes and communicates over HTTPS and REST APIs with a web service running on the cell.
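For illustration, locking and unlocking a cell might look like the sketch below; the accessLevelPerm attribute is the documented mechanism for this release, but verify it against your cell software version before relying on it:

```shell
# Run on the storage server (or later via ExaCLI once locked).
# Disable SSH logins to the cell ("lock" it):
cellcli -e "ALTER CELL accessLevelPerm = remoteLoginDisabled"

# Check the current setting:
cellcli -e "LIST CELL ATTRIBUTES accessLevelPerm"

# Re-enable SSH later if needed:
cellcli -e "ALTER CELL accessLevelPerm = remoteLoginEnabled"
```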
  • Running CellCLI Commands from Compute Nodes
    • The new ExaCLI utility enables you to run CellCLI commands on cell nodes remotely from compute nodes. This is useful in cases where you locked the cell nodes by disabling SSH access.
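A hypothetical ExaCLI invocation from a compute node (the cell hostname and user name are placeholders):

```shell
# Connect from a compute node to a (possibly SSH-locked) cell over
# HTTPS/REST and run a CellCLI command remotely:
exacli -l celladministrator -c cell01.example.com -e "LIST GRIDDISK ATTRIBUTES name, status"
```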
  • Updating Database Nodes with patchmgr
    • Oracle Exadata database nodes (running supported releases), Oracle Exadata Virtual Server nodes (dom0), and Oracle Exadata Virtual Machines (domU) can be updated, rolled back, and backed up in a rolling and non-rolling fashion using patchmgr, in addition to running dbnodeupdate.sh standalone. Performing the update via patchmgr enables you to run a single command to update multiple nodes at the same time; you do not need to run dbnodeupdate.sh separately on each node. The patchmgr and dbnodeupdate.sh to use for this activity are shipped within the new dbserver.patch.zip, which can be downloaded via document 1553103.1. See the maintenance guide for more details.
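As a sketch, a rolling update of the database nodes listed in a dbs_group file might look like the following; the patch file name and target version are placeholders, so check document 1553103.1 for the exact invocation for your release:

```shell
# From a driving node, with dbserver.patch.zip unzipped into the
# current directory:

# 1) Run prerequisite checks against all nodes listed in dbs_group
./patchmgr -dbnodes dbs_group -precheck \
    -iso_repo /u01/patches/p12345678_Linux-x86-64.zip \
    -target_version <target_release>

# 2) Perform the update in a rolling fashion, one node at a time
./patchmgr -dbnodes dbs_group -upgrade -rolling \
    -iso_repo /u01/patches/p12345678_Linux-x86-64.zip \
    -target_version <target_release>
```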
  • Creating Users and Roles
    • You can control which commands users can run by granting privileges to roles, and granting roles to users. For example, you can specify that a user can run the "list griddisk" command but not "alter griddisk". This level of control is useful in Cloud environments, where you might want to allow full access to the system to only a few users.
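For example, granting a user LIST but not ALTER on grid disks might be sketched as follows; the user and role names are made up, and the exact GRANT PRIVILEGE syntax should be checked against the storage server documentation:

```shell
cellcli -e "CREATE ROLE gd_monitor"
cellcli -e "GRANT PRIVILEGE LIST ON GRIDDISK ALL ATTRIBUTES WITH ALL OPTIONS TO ROLE gd_monitor"
cellcli -e "CREATE USER jsmith PASSWORD = *"     # the asterisk prompts for a password
cellcli -e "GRANT ROLE gd_monitor TO USER jsmith"
# jsmith can now run "LIST GRIDDISK" but not "ALTER GRIDDISK"
```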
  • MTU size on database nodes not changed when updating
    • When updating database nodes from existing releases, the MTU settings for InfiniBand devices remain the same. However, touching a file (touch /opt/oracle/EXADATA_UPDATE_MTU) before starting the update enables you to automatically adjust the settings to the Exadata default of 65520 during the update.
  • Fixed Allocations for Databases in the Flash Cache
    • The ALTER IORMPLAN command has a new attribute called flashcachesize which enables you to allocate a fixed amount of space in the flash cache for a database. The value specified in flashcachesize is a hard limit, which means that the database cannot use more than the specified value. This is different from the flashcachelimit value, which is a "soft" maximum: databases can exceed this value if the flash cache is not full.
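A sketch contrasting the two attributes (the database names and sizes are placeholders):

```shell
# PROD gets a guaranteed, hard-limited 100G slice of the flash cache;
# DEV may use up to 20G, but only while the cache has free space (soft limit).
cellcli -e "ALTER IORMPLAN dbplan=((name=PROD, flashcachesize=100G), (name=DEV, flashcachelimit=20G))"
```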
  • Oracle Exadata Storage Statistics in AWR Reports
    • The Exadata Flash Cache Performance Statistics sections have been enhanced in the AWR report:
      • Added support for Columnar Flash Cache and Keep Cache.
      • Added a section on Flash Cache Performance Summary to summarize Exadata storage cell statistics along with database statistics.
    • The Exadata Flash Log Statistics section in the AWR report now includes statistics for first writes to disk and flash.
  • Increased Maximum Numbers of Database Processes
    • See the table below for the maximum number of database processes supported per database node. These numbers are higher than in previous releases. The best practice is to keep the process count below these values. If a subset of your workload is running parallel queries, the maximum database process count will be between the "Number of Processes with No Parallel Queries" column and the "Number of Processes with All Running Parallel Queries" column.
  • Custom Diagnostic Package for Storage Server Alerts
    • Storage servers automatically collect customized diagnostic packages that include relevant logs and traces upon generating a cell alert. This applies to all cell alerts, including both hardware alerts and software alerts. The timely collection of the diagnostic information prevents rollover of critical logs.
  • kdump Operational for 8-Socket Database Nodes
    • In releases earlier than this one, kdump, a service that creates and stores kernel crash dumps, was disabled on Exadata 8-socket database nodes because generating the vmcore took too long and consumed too much space. Starting with this release, kdump is fully operational on 8-socket database nodes due to further optimizations.
  • Redundancy Check When Powering Down the Storage Server
    • If you try to gracefully shut down a storage server by pressing the power button on the front or through ILOM, the storage server performs an ASM data redundancy check. If shutting down the storage server could lead to an ASM disk group force dismount due to reduced data redundancy, the shutdown is aborted, and all three LEDs on all hard drives blink for 10 seconds to alert the user that shutting down the storage server is not safe. In that case you should not attempt a hard reset on the storage server.
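Before a planned shutdown you can run the same kind of check yourself; a common sketch is to confirm that every grid disk reports that deactivation is safe:

```shell
# "Yes" in asmDeactivationOutcome means the grid disk can go offline
# without forcing an ASM disk group dismount:
cellcli -e "LIST GRIDDISK ATTRIBUTES name, asmModeStatus, asmDeactivationOutcome"
```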
  • Specifying an IP Address for SNMP Traps
    • If the IP address associated with eth0 is not registered with ASR Manager, you can specify a different IP address using the new "fromIP" field in the "alter cell" command (for storage servers) or the "alter dbserver" command (for database servers).
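A hypothetical example for a storage server (the host, community string, and addresses are placeholders):

```shell
# Send SNMP traps to ASR Manager from 10.20.30.40 instead of the
# address bound to eth0:
cellcli -e "ALTER CELL snmpSubscriber=((host='asr01.example.com', port=162, community=public, fromIP='10.20.30.40'))"
```

On a database server the equivalent sketch would use dbmcli with ALTER DBSERVER, as noted above.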
  • Reverse Offload Improvements
    • Reverse offload from storage servers to database nodes is essential in providing a more uniform usage of all the database and storage CPU resources available in an Exadata environment. In most configurations, there are more database CPUs than storage CPUs, and the ratio may vary depending on the hardware generation and the number of database and cell nodes.
  • Cell to Cell Rebalance Preserves Flash Cache Population
    • When a hard disk hits a predictive failure or true failure, and data needs to be rebalanced out of it, some of the data that resides on this hard disk might have been cached on the flash disk, providing better latency and bandwidth accesses for this data. To maintain an application's current performance SLA, it is critical to rebalance the data while honoring the caching status of the different regions on the hard disk during the cell-to-cell offloaded rebalance.
