This document outlines steps and recommendations for moving your data into a lower environment, typically moving production data to a testing or development environment. It applies to 6.x and 7.x installations. For earlier versions you can leave out many of these steps, or refer to How To Replicate Your Jive 4.5 Production Environment to UAT and/or Steps to remember when moving data from Production to Dev. There is some overlap between these documents, so reading them for background information is highly recommended.
Disclaimer: This is not a copy-and-paste procedure. You need knowledge of your database and environment and proficiency with the command line. As with most disclaimers, there is no guarantee; take appropriate precautions, and any damage is not the responsibility of the author.
General Outline of Steps
- Stop all the services in production
- Copy all three databases in production (main application, analytics, activity)
- Copy the binary storage if you are using the file storage provider
- Copy files from the primary node (node.id, crypto/, themes/)
- Optional: copy activity streams and search index files
- Restart production
- Restore database in lower environment
- Restore file backups (binary storage, activity streams, search index, people index)
- Modify the database and other environment files
- Startup the lower environment
This document covers the specifics of these steps as well as things to watch out for when executing this procedure. It will not cover every command, since the reader is expected to have the expertise necessary to manage the environments; for instance, the steps for backing up and restoring your database are not covered.
Copy Production Artifacts
Picking up after the production instance is stopped and the database backups are started:
Determine the master node
Run the following query against the main database.
SELECT propvalue FROM jiveproperty WHERE name = 'jive.master.encryption.key.node';
The result of that query can be used to compare against the node.id of each production node. The next step assumes that the commands are run on this master node.
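As a sketch, the comparison can be scripted. The `is_master` helper name is invented for this example, and the node.id path shown in the comment is the usual default:

```shell
#!/bin/sh
# Helper (name invented for this sketch): compare a node.id file's
# contents against the master key node id returned by the query above.
is_master() {
    node_file="$1"   # e.g. /usr/local/jive/applications/sbs/home/node.id
    master_id="$2"   # propvalue from the jiveproperty query
    if [ "$(cat "$node_file" 2>/dev/null)" = "$master_id" ]; then
        echo "master"
    else
        echo "not master"
    fi
}

# Typical use on each production web node (values are placeholders):
# is_master /usr/local/jive/applications/sbs/home/node.id "abc-123-def"
```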
Create a tar file of the essential files
cd <jive installation>/applications/<instance>/home
tar -czvf master-node.tgz node.id crypto/ themes/
The Jive installation directory is usually /usr/local/jive but can be any folder in a non-root installation. The instance is usually sbs, but other instance names are possible.
Create a tar file of the attachments and images
Assuming the instance is using the FileStorageProvider the following query against the main database will reveal the location of the attachments and images.
SELECT propvalue FROM jiveproperty WHERE name = 'jive.storageprovider.FileStorageProvider.rootDirectory';
Using that value, tar up the file storage. Warning: this can be quite large, so take appropriate measures to ensure there is enough space.
cd <propvalue from sql query>
tar -cvf binaryfilestore.tar *
Note that the namespace is assumed to be the only folder in the root directory. If the installation has multiple namespaces, query for "jive.storageprovider.namespace" and use that value at the end of the tar command instead of *.
Also note that this tar command is uncompressed. The binary file store content will not compress enough to make the effort worthwhile.
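Before running the tar, it is worth confirming the destination filesystem has room. The `check_space` helper below is a sketch invented for this document; the paths in the usage comment are placeholders:

```shell
#!/bin/sh
# Helper (name invented for this sketch): check that the filesystem
# holding DEST has more free space than SRC currently occupies.
check_space() {
    src="$1"; dest="$2"
    needed_kb=$(du -sk "$src" | awk '{print $1}')
    avail_kb=$(df -Pk "$dest" | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -gt "$needed_kb" ]; then
        echo "ok: ${avail_kb}KB free, ${needed_kb}KB needed"
    else
        echo "insufficient: ${avail_kb}KB free, ${needed_kb}KB needed"
    fi
}

# Typical use before creating the archive (paths are placeholders):
# check_space /usr/local/jive/binstore /path/to/tar/destination
```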
Optional: Copy the people index
tar -czvf people-search.tgz search/
Note that this can be quite large depending on the number of users in the installation. The people index can be rebuilt in the new environment, so this step is not required.
Transfer the archive files to the machine where the restore will be taking place.
There should be an archive for the master node, as well as one for the file storage (assuming the installation uses the file storage provider rather than the database for attachments and images), plus the optional people search index archive.
Optional: Copy search and activity engine files
This one is left as an exercise for the reader. Depending on the size of the installation, the number and size of these files can be incredibly large. Since both the activity engine and the search index can be rebuilt, rebuilding is usually the easier method. Activity engine rebuilds can take quite a long time, so depending on the environment and the ability to transfer these large file collections, copying could save time; that is not usually the case, though, and rebuilding is both easier and faster. This also assumes the installation uses a local search service; with the cloud search service, copying the search index is not possible.
Locations for these file stores.
Activity: <jive installation>/services/eae-service/var
Search: <jive installation>/services/search-service/var
There are other steps involved in copying these two services, including the configuration files. The details on this step will have to be added to this document later. In brief: pay attention to the node.id on the activity engine and, for search, the tenancy files.
The production instance can be started now
Restore in the Lower Environment
Using the appropriate commands for the database, restore the three databases (main, activity, analytics).
Setup the environment
Not all details are covered here. Again, it is assumed that the reader fully understands how to set up a Jive environment, both clustered and non-clustered.
The initial startup in the lower environment should involve only a single web node, plus one or more activity nodes and search nodes.
Prepare the binary file store
Usually this is a shared folder (an NFS mount), though that depends on the environment. The first part of this setup is to create or mount the folder that will be used in the lower environment. Once that folder is set up, the file created during the production copy needs to be made available. Since this is typically a very large file, the transfer specifics are not included in this document.
cd <path/to/new/jive/binstore/root>
tar -xvf <path/to/production/copy>/binaryfilestore.tar
Prepare the web node
First step in preparing the web node is to extract all the files that were copied from the production node.
cd <jive installation>/applications/<instance>/home
tar -xzvf path-to/master-node.tgz
If the people index was copied, it can also be extracted at this time, still in the home folder.
tar -xzvf path-to/people-search.tgz
Create the jive_startup.xml file
The web node should be ready now, with one exception: the jive_startup.xml file needs to be configured for the new database. This file was not included in the copy above. It is one of the files that could have been copied from the production node, but since it connects to the production database it was left out to be safe. It still needs to be present, so copy it manually into the applications/<instance>/home folder and modify it. Two key items need to change: the database URL information, so that it points at the restored primary database, and the password. For the password element, remove the "encrypted" attribute and replace the encrypted password with the plain text password for the newly restored database. On startup, Jive will encrypt the password so that it does not remain on disk in unencrypted form.
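As an illustration, the relevant fragment of jive_startup.xml looks roughly like the sketch below. The element names follow a typical DefaultConnectionProvider configuration, and the driver, host, database name, and password are all placeholders; verify everything against the production copy of the file.

```xml
<!-- Sketch only: verify element names against your production copy. -->
<database>
  <DefaultConnectionProvider>
    <driver>org.postgresql.Driver</driver>
    <!-- Point at the restored lower-environment database -->
    <serverURL>jdbc:postgresql://lower-db-host:5432/jive</serverURL>
    <username>jive</username>
    <!-- encrypted="true" removed; Jive re-encrypts this on startup -->
    <password>plaintext-password-here</password>
  </DefaultConnectionProvider>
</database>
```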
Prepare the database
Attached is the framework for an SQL script to use in preparing the database before starting up the environment. It contains a number of sections with short explanations of the purpose of each update or insert. The values that need input are bracketed with angle brackets <>. Look through this script and update it to fit the environment. It is strongly suggested to run one statement at a time and verify that things are updating as expected. Pay special attention to the options for masking emails and turning off email sending and receiving; it is almost always advisable to disable email to prevent accidental sends and receives.
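For example, the email-masking portion of such a script might look like the sketch below. The table and column names (jiveuser, email, userid) follow the usual Jive schema, but verify them against the restored database before running anything:

```sql
-- Illustrative sketch only: rewrite every user's email address to a
-- non-routable domain so the lower environment cannot mail real users.
UPDATE jiveuser
   SET email = CONCAT('user', userid, '@example.invalid');
```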
Startup the services
Everything should be ready to have a running system. Start the activity node(s) and search node(s) prior to starting the web node. If everything was configured properly the web node will be able to talk to both of those as soon as it starts running.
Once the web node is running, log in to the admin console. A few things need to be configured to get the system back to a running state. Not all of them are required, depending on the lower environment, especially if it is not clustered.
- Visit the System -> Settings page.
- On the first screen configure the activity node if it is not already showing up. Typically it will already be running and connected if the database script was correct. Refresh a few times and the queue should show up indicating the activity streams are being rebuilt.
- Next, navigate to the analytics settings and enter the database password for the newly restored analytics database, then enable analytics. Analytics was disabled because there is currently no way to change the password and re-encrypt it properly.
- Visit the plugins page and re-upload the plugins. This may not be required depending on choices made in the database script. Don't restart yet.
- If the lower system is a clustered environment then go to the cache settings and setup the cache server. That will enable setting up the remaining cluster nodes.
This is a good time to check all the settings and make sure everything appears to be working as needed. Email can be re-enabled if the environment has a dummy SMTP server or something else that captures emails instead of sending them. Again, this depends on the environment.
When everything looks set up properly, visit the System -> Settings -> Search page and kick off a rebuild of the content index. If the people search index was not copied, also kick off a rebuild of the user index. If the lower environment is a cluster, wait to start the user index rebuild until all members of the cluster are up and running.
A restart will of course be required if plugins were uploaded. A restart is usually recommended once all of this is completed anyway, just to verify that everything is running and working as expected. Then configure the cluster and start the user search index rebuild.
The following is a scratch note about clearing the boxes if they were previously running.
Clear the boxes
- cache server
-- cd <jive home>/voldemort/config
---- rm cluster.xml
---- rm server.properties
---- rm stores.xml
---- rm -rf .temp
---- rm -rf .version
- eae server (optionally copy the eae server files during downtime)
-- clear services/eae-service/etc/.json
-- clear the services/eae-service/var/data directory
-- clear the services/eae-service/etc/config and node.id files
- search (optionally copy the search directory indexes during downtime)
-- clear services/search-service/var
- web node(s)
-- clear the tomcat work directory: rm -rf <jive_home>/var/work/sbs
-- cd <jive_home>/applications/sbs/home/
-- clear the jiveHome attachment and cache directories:
---- rm -f ./attachments/*.txt
---- rm -f ./attachments/cache/*
---- rm -f ./images/*.bin
---- rm -f ./images/cache/*
---- rm -rf ./documents/*
---- rm -rf ./cache/pagecache/*
---- rm -rf ./cache/jiveSBS/*
---- rm -rf ./cache/<name of your storage/site> <anything but pagecache and jiveSBS>
---- rm -f ./www/resources/scripts/gen/*
---- rm -rf ./plugins/*
---- rm -rf ./themes/<your theme names> <leave the custom and palette folders>
---- rm ./node.id
- all boxes: clear all logs (rm -rf <jive_home>/var/logs/*)
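The web-node portion of the list above can be sketched as a script. The `clear_web_node` function name is invented for this example, the instance name is hardcoded to the usual sbs, and the site-specific theme and cache folders are left as comments since their names vary:

```shell
#!/bin/sh
# Sketch of the web-node cleanup steps above. Pass the Jive home
# directory as the first argument; assumes the usual "sbs" instance.
clear_web_node() {
    home="$1/applications/sbs/home"
    rm -rf "$1/var/work/sbs"                 # tomcat work directory
    rm -f  "$home"/attachments/*.txt
    rm -f  "$home"/attachments/cache/*
    rm -f  "$home"/images/*.bin
    rm -f  "$home"/images/cache/*
    rm -rf "$home"/documents/*
    rm -rf "$home"/cache/pagecache/*
    rm -rf "$home"/cache/jiveSBS/*
    rm -f  "$home"/www/resources/scripts/gen/*
    rm -rf "$home"/plugins/*
    rm -f  "$home"/node.id
    # themes/ and site-specific cache folders vary per site; clear by hand
}

# Typical use (path is a placeholder): clear_web_node /usr/local/jive
```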