Sorry for the long post but this issue needed some background for context.
We currently have a custom content type plugin built on 5.0 that we’ve re-architected in order to move to 7.0 on our on-prem Jive instance. Instead of the data being held inside the Jive database as part of the custom content type, we’ve moved it to a separate external database and will surface it to users with a custom tile that connects to a web service for the data (thanks Ryan Rutan and Ryan King for sowing the seeds of this solution during a chat at JiveWorld 11).
One of the features we would have given up by moving away from the custom content type is Jive’s native functionality such as search, @mentions, etc. We’re addressing this by simply publishing the content type’s text as a native Jive HTML document. This gives us back the native search and @mention functionality, and it looks like it will work quite well in practice.
However, migrating the 17,000+ custom content type records is looking tricky. Getting the data into the separate database is straightforward, but we’d also like to publish all of those records as HTML documents. Additionally, we’d like to retain the original authorship and publish dates. We also need to store a reference to each HTML document (the documentID and contentID) in our external database, which makes the XML document migration framework unhelpful for this task.
We think the REST API can almost get us there. Although we don’t have access to the publish date until 3.6, we can work around this by updating that field directly in the database after the REST API has created the record (we need a reference to it in our external database anyway).
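Roughly what we have in mind per record is sketched below. This is only a sketch: the endpoint path, payload shape, and field names are my assumptions based on the core REST API v3 docs (the hostname is a placeholder and authentication is omitted), so verify everything against your own instance before running it.

```python
# Sketch only: endpoint path and payload shape are assumptions based on
# the Jive core REST API v3 docs -- verify against your own instance.
import json
import urllib.request

JIVE_BASE = "https://jive.example.com"  # placeholder hostname


def build_document_payload(subject, html_body):
    """Build the JSON body for POST /api/core/v3/contents to create a
    native HTML document from one migrated record."""
    return {
        "type": "document",
        "subject": subject,
        "content": {"type": "text/html", "text": html_body},
    }


def publish_record(subject, html_body):
    """POST one migrated record as a native Jive document and return the
    parsed response, from which we'd store the contentID externally.
    Authentication (e.g. a Basic auth header) is omitted for brevity."""
    payload = build_document_payload(subject, html_body)
    req = urllib.request.Request(
        JIVE_BASE + "/api/core/v3/contents",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

After each successful POST we’d record the returned IDs in our external database and then fix up the publish date there with direct SQL, since the API won’t accept it.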
However, the gotcha is that if we post all of those documents via the REST API, the activity engine will pick them up and flood users’ streams with the new documents. I can disable the activity engine nodes during the migration, but Jive happily queues the events up until a node comes back online. This otherwise good design is throwing a wrench into our plan.
My question: Is there some way to “flush” this queue so that notifications do not reach the activity engine during the migration? And if so, will this also prevent the digest emails from picking up the new documents? We’re not averse to database surgery if that’s what it takes; we would just need to know which records to purge. Thanks in advance to anyone who can provide some insight into this.