Centralisation, Collaboration and Continuity
Here at Tyrell we pride ourselves on spotting trends and responding accordingly. What we are seeing now, as companies examine their storage strategy, is a focus on the three ‘C’s: Centralisation, Collaboration and Continuity.
Centralisation used to be a suggestion that on the face of it made absolute sense, but in reality would strike fear into the hearts of IT and facility managers. The fear that the system would go down, taking the whole facility with it, meant that you couldn’t realistically centralise everything without mirroring it all on a second solution. In reality you would have to buy two of everything, and as storage was expensive to begin with, buying two was out of reach for all but the largest facilities.
What we are seeing now are storage systems with very high bandwidth and multiple levels of redundancy – not only can a drive fail, but a whole node can fail, without interrupting users’ work. Dual OS disks, redundant RAID controllers, redundant metadata controllers and redundant power supplies all contribute to peace of mind and trust in the hardware.
Combined with a tape-based or cloud/data centre/offsite backup, this allows you to dip your toes into centralising all your assets onto one platform, confident that you have a secure and reliable backup solution in place.
Suddenly the benefits from an IT perspective become more obvious. Setting up automated rules for data, permissions and access is much simpler when all the data is in a central location. Whether you wish to do a simple backup to tape or provide stubs with tiered storage through to deep archive, the options are all there. What’s more, it’s scalable, so you can start small and build to a full system.
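To make the idea of automated data rules concrete, here is a minimal Python sketch. It is not Pixit-specific – the directory layout, the 90-day threshold and the move-to-archive policy are all assumptions made up for the example – but it shows the kind of rule that becomes trivial once everything lives on one central file system:

```python
import os
import shutil
import time

def tier_old_files(source_dir, archive_dir, max_age_days=90):
    """Move files not modified within max_age_days from the hot tier
    (source_dir) to the archive tier (archive_dir), preserving the
    relative directory structure. Returns the relative paths moved."""
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                rel = os.path.relpath(path, source_dir)
                dest = os.path.join(archive_dir, rel)
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(path, dest)
                moved.append(rel)
    return sorted(moved)
```

In production a job like this would normally be driven by the storage platform’s own policy engine rather than a hand-rolled script, but the logic – select by age, move to a cheaper tier – is the same.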
There’s no mystery anymore to tools like Autodesk’s Shotgun or ftrack, which many studios use to track and manage their assets. To do this most effectively you need to be working off central storage that all users can see, read and write to. In the case of Nuke or Flame it is especially important that the storage has high enough bandwidth to let users work directly from it, rather than localising material and ending up with two versions in separate locations at the same time. Again, this is now within reach of most facilities, as the cost of high-bandwidth, high-availability storage has dropped drastically in recent years.
The next logical step is to collaborate across multiple facilities in multiple locations within a single namespace. This is now a reality, and possible without having to employ banks of IT experts to manage and implement it for you. A word of caution, though: it is still very dependent on the distances and connections between facilities. However, there is a solution that won’t break the bank and allows for central updating of assets in your Shotgun or ftrack project, again simplifying your pipeline workflow significantly when working across multiple sites. Then there is cloud-based collaboration: cloud-based users can work on material that syncs with the central in-house project and updates your asset management software.
In summary, multiple locations (physical or cloud) can now be interconnected in a real and useable manner. Useable is the key word here: it has always been technically possible, but until recently the cost of the infrastructure required put it out of reach.
We are now seeing contractual requirements from content owners that specifically state that an offsite backup and disaster recovery process must be in place within a facility, to ensure continuity in the event of a complete shutdown. There are a number of ways to approach this, whether tape-based, hardware-based or cloud-based. Each has its pros and cons, and people generally settle on a combination. The biggest issue for clients is that this is a non-billable cost.
However, it is a useful exercise to calculate the real cost of downtime in your organisation and look at the figure on paper. It tends to focus the mind, and knowing the true cost of failure allows you to make an educated decision to implement a proper DR/backup strategy. It’s like insurance: you may pay it all your life and never make a claim, but if you do have an accident you’ll be very glad you have it.
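A back-of-the-envelope calculation is usually enough to focus the mind. The Python sketch below is purely illustrative – the headcount, rates and overhead figures are made-up assumptions, and every facility will have its own cost lines to add:

```python
def downtime_cost(num_artists, hourly_rate, hours_down,
                  overhead_per_hour=0.0, missed_deadline_penalty=0.0):
    """Rough cost of a facility outage: idle labour, plus fixed
    overheads accruing during the outage window, plus any
    contractual penalty for a missed delivery."""
    labour = num_artists * hourly_rate * hours_down
    overhead = overhead_per_hour * hours_down
    return labour + overhead + missed_deadline_penalty

# Illustrative figures only: 30 artists at 45/hour, down for two
# working days (16 hours), with 500/hour of fixed overhead:
cost = downtime_cost(30, 45, 16, overhead_per_hour=500)
print(cost)  # 29600.0
```

Even with conservative inputs, the total tends to dwarf the annual cost of a sensible DR/backup setup.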
We have one product that satisfies all of the above in an elegant and cost-effective manner, and that is Pixit Media. https://www.pixitmedia.com/
A high-performance parallel file system with limitless scale, it satisfies all three of the ‘C’s above, plus one more: Cost. Because it uses standard hardware, it is highly scalable without breaking the bank. Pixit Media emerged from huge success in HPC, processing vast amounts of data in environmental sciences such as seismic analysis, before its founders recognised the many parallels with the requirements of animation and VFX. By giving you, the end user, the tools to adapt the metadata and information it natively gathers, it saves time and money in developing your pipeline.
In truth, every facility from the smallest to the largest has different requirements, and it is difficult to do Pixit justice in a short blog. Give me a call and we can talk about your current setup and, more importantly, where you want and need to be. We can then work together on a proposal that is tailored to you, guarantees performance and lets you implement the three ‘C’s with peace of mind.