

Session 3: Big Data Issues in Manufacturing
Pages 41-49



From page 41...
... Presentations were made by Jesse Margiotta, DARPA, and Wayne Ziegler, Army Research Laboratory.

DATA NEEDS TO SUPPORT ICME DEVELOPMENT IN DARPA OPEN MANUFACTURING
Jesse Margiotta, Technical Advisor, DARPA

Dr.
From page 42...
... The project aims to develop rapid qualification of powder bed fusion additive manufacturing processes -- in particular, direct metal laser sintering (DMLS)
From page 43...
... • Microstructural models to predict stresses, grain size, strain hardening, and other variables. • Yield strength prediction tool.
From page 44...
... He responded that the Honeywell project is generating considerable amounts of data, but it is not considered big data by today's definition. However, the materials manufacturing community does not currently have the ability to manage and analyze even this relatively modest amount of data.
From page 45...
... Metadata include information related to testing conditions and program information necessary for data sets to be completely understood and, if necessary, validated through additional testing. Data mining techniques are then applied to the data, and the mined data are used to inform models.
From page 46...
... The traditional workflow paradigm is to execute a task, collect and extract data, return a bigger data set, and pair it with separately recorded information about the process. However, when the collection of data is separated from the collection of process information, fidelity drops.
From page 47...
... ARL is still addressing this first step, and Mr. Ziegler explained that the data collection decisions are iterative; once they have started collecting a particular data set, researchers will likely determine that they will need other data as well.
From page 48...
... A participant indicated that data ownership can be an obstacle to data sharing. He said that DOD contracts contain many data requirements, which need to be managed on the contractual side to ensure they are not cost-prohibitive.
From page 49...
... Someone else remarked that the NSF repository is not user-friendly. Another participant pointed out that it is fairly common for universities to have permanent storage facilities available and gave the Deep Blue program at the University of Michigan as an example. However, other participants argued that these programs are expensive and do not always include metadata.

