This is the eighth post in the series from the Open Hardware Distribution and Documentation Working Group (which we’ve shortened to “DistDoc”). The group aims to produce a proof of concept for distributed open science hardware (OScH) manufacturing, using a paradigmatic case study as a starting point to explore key aspects like quality, documentation, and business models. We hope the experience motivates others to discuss and implement new strategies for OScH expansion.
During the last sessions we teased out the two versions of OpenFlexure that would both present an interesting comparison and provide unique challenges: one focused on educational use and markets, and the other a significantly higher-grade, precision-driven version intended for research. This lean towards a comparative approach was largely thanks to a session with people working in manufacturing and distribution — Mboa Lab, Make Good Collective and Michigan Neuroprosthetics — who demonstrated that, depending on a manufacturer’s administrative, financial and legal setup, the value placed on collective approaches to distributed manufacturing differs significantly.
During this meeting it became evident that we needed to clarify our audience and focus more specifically on smaller, independent, regional manufacturers able to bring products to their own communities. We identified this audience because, in a quick assessment of the value of collective models (with a notably small sample of manufacturers — this was not a scientific study), independent manufacturers reported high value across all four points: a quality mark; shared marketing and sales support; process improvement and knowledge sharing; and demonstrating that distributed manufacturing is a model that can work for open science hardware. By contrast, manufacturers working within a larger administrative apparatus (a university) ranked process improvement and knowledge sharing and a quality mark highly, but found shared marketing and sales support and the demonstration of a distributed structure less important.
Based on this audience segmentation, our next line of focus was to create three sub-groups that would begin working between meetings on the lines of inquiry we collectively needed to address: documentation and quality assurance process; marketing materials and technical specs; and near-term administrative and legal arrangements.
With a clear way to move forward, as noted in the previous post in this series, the documentation and quality assurance group suggested that a next step in their process (in addition to continued development on SurveyStack and GitBuilding) would be a failure mode analysis to be workshopped with the group at the next meeting. A failure mode analysis would identify, for both versions of the microscope, ten ways the product could fail, then test and solve for those failures in the assembly process.
For instance: while a teacher is doing a demonstration, they twist a knob too far and it breaks. This tells us the knob can’t withstand more than 75 pounds of force, so that limit is tracked and either designed out during product development or recorded in the documentation the user receives. The documentation and quality assurance team plans to come back to the group with a model for failure analysis and a process to develop questions around it (for instance, what fails “in the field” versus what fails during assembly) that would go into SurveyStack. We’ll then workshop that with the whole group in two weeks.
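One common way to structure this kind of analysis is FMEA-style scoring, where each failure mode gets a Risk Priority Number (severity × occurrence × detection) so the team knows which failures to solve first. The sketch below is illustrative only: the failure modes, the 1–10 scales, and the scores are assumptions for demonstration, not the group’s actual list or method.

```python
# Hypothetical FMEA-style ranking of failure modes. The RPN formula
# (severity x occurrence x detection) is common FMEA practice; the
# entries and scores here are invented for illustration.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    stage: str        # "assembly" or "in the field"
    severity: int     # 1 (negligible) to 10 (critical)
    occurrence: int   # 1 (rare) to 10 (frequent)
    detection: int    # 1 (easily caught) to 10 (hard to catch)

    @property
    def rpn(self) -> int:
        """Risk Priority Number: higher means address sooner."""
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Focus knob over-torqued and breaks", "in the field", 7, 4, 6),
    FailureMode("Stage flexure cracked during assembly", "assembly", 8, 3, 2),
    FailureMode("Camera misaligned after shipping", "in the field", 5, 5, 7),
]

# Rank failure modes so the highest-risk ones are addressed first,
# either in product development or in the user-facing documentation.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:3d}  [{m.stage}]  {m.description}")
```

Splitting entries by stage also maps directly onto the group’s “fails in the field” versus “fails during assembly” question, which could feed into the SurveyStack questionnaire.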
The administrative and legal group spent a lot of time going back and forth on a quality mark: its creation, release, provided value and maintenance are all time-intensive. We decided to move forward without the mark as the first focus; if demand for it proves high (as our initial tiny focus group indicated), that is when we would prioritize it and spend more time and resources thinking about how it is structured. Though several group members pushed for the mark to be the first administrative priority, we reached consensus around the reality that it requires resources to maintain — namely human resources — and, with all of us committed to multiple other projects, we wanted to make sure we weren’t overcommitting too early.
Instead, we are going to focus our efforts on developing and versioning the framework and vision for 1) the administrative and legal setup of the distributed network, and 2) what a mark could look like, taking into consideration its legal components, verification and partnership (what it means to get the mark, who maintains it, and what resources are needed for it to scale), and finally how to market and advertise a mark — why would people care about it and know about it?
We’ll be coming back with news from the implementation of the failure mode analysis. Stay tuned!