Attended: Chris Gray, Sam Rowley, Song Y and myself.
We looked at how the current validation documents were submitted, recorded (stored) and used by the QA dept.
- Historically, documents were submitted in paper format (in a box file) - these were recorded and stored in the QA dept.
- Recently, validation documents were also submitted in an electronic format and these were stored in a shared drive. The folder structure for this was by faculty >> award code.
- We decided that trying to replicate this structure in the repository would not be appropriate or necessary - principally because faculties and organizational structures change. However, recording the faculty name somewhere in the metadata would be useful for searching purposes, as it would be a key searchable field. The use case would be a user finding validations that were submitted by a particular faculty.
- With respect to the documents that are collected, these would be:
- Programme Specifications
- Handbook (Student and Award)
- Module Descriptor
- Mentor Handbook (Foundation Degrees)
- Validation Report (often in pdf format)
- Generic Validation Support Documents (could be multiple instances)
- Essentially, we noted that the key documentation of interest would be associated with validations that had been successful (not that unsuccessful validations wouldn't be interesting - it was just an issue of ethics). With this in mind, the QA documents could be 'graded' into the following 'types':
- Pre-Validation Documents (originally submitted)
- Validation Report (conditions for success)
- Post-Validation Documents (amended for success)
We discussed the feasibility of using a program to extract key words from the validation documents, to assist in completing the task of entering the key metadata that needs to be recorded by the DIVAS system. Essentially, the key document of interest is the 'Programme Specification', which has some key fields that match the type of metadata that needs recording:
- Awarding Body
- Teaching Institution
- Accreditation by Professional / Statutory Body
- Final Awards
- Programme Title
- UCAS Codes (possibly not required for metadata)
- QAA Subject Benchmarking Group (possibly not required for metadata)
- Date of Production
- University Faculty / School
- Method of Delivery (Face-2-Face, Blended)
- Mode of Attendance / Delivery Method (e.g. PT / FT)
- TheSiS / Award Code
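As a first feasibility check, the field extraction described above could be sketched as a simple labelled-field scan over the document's text. This is only an illustrative sketch: the field labels come from the list above, but the sample text and the `extract_metadata` function are invented, and a real Word document would first need its text extracting (e.g. via a conversion step).

```python
import re

# Hypothetical sketch: pull labelled fields out of the plain text of a
# Programme Specification. Field labels are taken from the meeting notes;
# the sample text below is invented for illustration.
FIELDS = [
    "Awarding Body",
    "Teaching Institution",
    "Final Awards",
    "Programme Title",
    "Date of Production",
]

def extract_metadata(text):
    """Return a dict of field -> value for any labelled fields found."""
    metadata = {}
    for field in FIELDS:
        # Match "Field: value" (or "Field - value") up to the end of the line.
        match = re.search(rf"{re.escape(field)}\s*[:\-]\s*(.+)", text)
        if match:
            metadata[field] = match.group(1).strip()
    return metadata

sample = """Programme Title: BSc (Hons) Computing
Awarding Body: Example University
Date of Production: 2010-05-01"""

print(extract_metadata(sample))
```

If a scan like this works reliably on real Programme Specifications, the extracted values could pre-populate the upload interface rather than being typed in by hand.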
If this is technically possible, the idea is to use this functionality to populate fields in an interface that can be used to assist someone in uploading documents to HIVE. This interface would therefore assist the user in completing the following tasks:
- Input and record the key metadata for the validation documents (for all documents)
- Upload documents (as though they were a collection), also indicating which were 'Pre' and 'Post' validation documents (along with the main validation document)
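The two tasks above could hinge on treating a validation's documents as one collection, with each document graded 'pre' or 'post'. A minimal sketch of what such a collection might look like before it is pushed to HIVE - the structure and field names here are my own invention for illustration, not HIVE's actual API:

```python
# Hypothetical sketch of the "collection" an upload interface might
# assemble. The pre/post grading follows the meeting notes; the dict
# layout is an assumption, not a real HIVE payload.
def build_collection(metadata, pre_docs, validation_report, post_docs):
    """Group a validation's documents, tagging each with its grade."""
    return {
        "metadata": metadata,  # shared metadata for the whole collection
        "documents": (
            [{"file": f, "grade": "pre-validation"} for f in pre_docs]
            + [{"file": validation_report, "grade": "validation-report"}]
            + [{"file": f, "grade": "post-validation"} for f in post_docs]
        ),
    }

collection = build_collection(
    {"Programme Title": "BSc (Hons) Computing"},
    ["programme_spec_v1.doc", "module_descriptors.doc"],
    "validation_report.pdf",
    ["programme_spec_final.doc"],
)
print(len(collection["documents"]))  # 4 documents in the collection
```

Keeping the grade on each document (rather than in separate folders) would let the repository search and display a validation's history as a single unit.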
Back to the LOM?
For some weeks I have been looking at the most appropriate/useful metadata scheme to use (in conjunction with HIVE). After my meeting with library colleagues, it was noted that using a simple scheme was appropriate, so it was assumed that Dublin Core may be the most useful. However, with the team now looking at using the API functionality of HIVE, it could be argued that a simple interface lets us use a more complex scheme (like LOM) that offers a greater range of fields/attributes - as the user would not be intimidated by all the fields that needed to be populated (many of which would be extraneous and confusing). I will look into which LOM fields could be used to record the key metadata for the project.
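As a starting point for that investigation, one possible mapping from Programme Specification fields to LOM element paths might look like the sketch below. The LOM category names (general, lifeCycle, classification) are real, but which element best fits each field is my working assumption and still to be confirmed.

```python
# Candidate mapping from Programme Specification fields to IEEE LOM
# element paths. These assignments are assumptions to be reviewed,
# not a settled profile.
LOM_MAPPING = {
    "Programme Title": "general.title",
    "Date of Production": "lifeCycle.contribute.date",
    "Awarding Body": "lifeCycle.contribute.entity",
    "University Faculty / School": "classification.taxonPath",
}

def to_lom(metadata):
    """Re-key extracted metadata by its candidate LOM element path."""
    return {LOM_MAPPING[k]: v for k, v in metadata.items() if k in LOM_MAPPING}

print(to_lom({"Programme Title": "BSc (Hons) Computing"}))
```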
Work in progress
- Understanding and exploring the API functionality of HIVE for the purposes of creating a user friendly way of interacting with HIVE - to complete the following tasks:
- Uploading documents into HIVE
- Searching of HIVE
- Embedding an API for HIVE in NING
- Investigating how to extract data from Word documents
- Investigate the LOM scheme for HIVE - what needs to be recorded
Useful report: Nine questions to guide you in choosing a metadata schema