Lea's Blog
Lea in the Box
03/24/16

How Lea moves out and is evaluated

SEBIT is a global digital content publisher and educational technology provider. In its pedagogical vision, SEBIT is a strong proponent of Competency-Based Education (CBE), not only for higher education but also for the K-12 grades. Both the content it develops and the technology it builds are therefore centered around competency structures, and the analytics tool set in LEA’s Box likewise works on competency structures. The data extracted from various sources (eLearning products or product parts) needs to be linked to competencies before the LEA’s Box tool set can render analytics results; the sketch below illustrates that linking step. As a piloting partner in the LEA’s Box project, SEBIT studied ways of cross-validating the competency structures against the products used in piloting, an activity that helps improve the production process.
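As an illustration of that linking step, here is a minimal sketch that rolls raw per-item results up to per-competency success rates. The item-to-competency map, item IDs, and event format are hypothetical stand-ins for a real product’s competency structure, not LEA’s Box internals.

```python
from collections import defaultdict

# Hypothetical map from content items to competencies; in a real product
# this comes from the publisher's competency structure.
ITEM_COMPETENCIES = {
    "quiz-item-1": ["fractions.addition"],
    "quiz-item-2": ["fractions.addition", "fractions.simplification"],
    "quiz-item-3": ["fractions.simplification"],
}

def aggregate_by_competency(events):
    """Roll raw (item_id, correct) results up to per-competency success rates."""
    totals = defaultdict(lambda: [0, 0])  # competency -> [correct, attempts]
    for item_id, correct in events:
        for competency in ITEM_COMPETENCIES.get(item_id, []):
            totals[competency][1] += 1
            if correct:
                totals[competency][0] += 1
    return {comp: right / total for comp, (right, total) in totals.items()}

# One learner's session: three answered items.
print(aggregate_by_competency([
    ("quiz-item-1", True),
    ("quiz-item-2", False),
    ("quiz-item-3", True),
]))
```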

Authentic direct assessments and continual evaluation are cornerstones of CBE, not only to support teachers in giving timely feedback, but also for the students themselves, to track their own progress and self-regulate. Learning analytics is the stage that potentially turns raw data into actionable insights. Yet effective presentation is crucial to cause any change or affirmation in a learner’s course of action. During the pilots run by SEBIT, LEA’s Box tools were used by teachers and students for competency-based tracking of performance and for decision making. Not all students are accustomed to tracking their performance in this manner, so the LEA’s Box pilots were a great opportunity to observe and measure the effects of various presentation techniques. Measuring users’ behavioral intentions towards a technology product is another outcome facilitated by the project. A Technology Acceptance Model (TAM3) construct was adopted to measure Perceived Usefulness and Perceived Ease of Use, which have been shown to drive user attitudes towards technology products such as the LEA’s Box tools; a scoring sketch follows below.
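For readers unfamiliar with TAM-style instruments, constructs such as Perceived Usefulness are typically measured with several Likert items and scored as the item mean. The sketch below shows that scoring step; the item names and groupings are illustrative placeholders, not the exact TAM3 questionnaire.

```python
# Illustrative TAM-style constructs, each backed by 7-point Likert items.
# Item codes (PU1..PEOU4) are placeholders, not the official TAM3 items.
SCALE_ITEMS = {
    "perceived_usefulness": ["PU1", "PU2", "PU3", "PU4"],
    "perceived_ease_of_use": ["PEOU1", "PEOU2", "PEOU3", "PEOU4"],
}

def score_constructs(responses):
    """Score each construct as the mean of its 1-7 Likert responses."""
    return {
        construct: sum(responses[item] for item in items) / len(items)
        for construct, items in SCALE_ITEMS.items()
    }

# One respondent's answers on the 1-7 agreement scale.
print(score_constructs({
    "PU1": 6, "PU2": 7, "PU3": 5, "PU4": 6,
    "PEOU1": 4, "PEOU2": 5, "PEOU3": 5, "PEOU4": 4,
}))
```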



Teaching social-emotional skills is often seen as a way to move away from a narrow focus on test scores, and to consider instead the whole child. It may seem contradictory, then, to test for those skills. In education, however, the adage is “what’s measured gets treasured.”

US schools began emphasizing social-emotional learning around 2011, after an analysis of 213 school-based programs teaching such skills found that they improved academic achievement by 11 percentile points. But the race to test for so-called social-emotional skills has raised alarms even among the biggest proponents of teaching them, who warn that the definitions are unclear and the tests faulty. There is little agreement on what skills matter: Self-control? Empathy? Perseverance? Joy?

“There are so many ways to do this wrong,” said Camille A. Farrington, a researcher at the University of Chicago who is working with a network of schools across the country to measure the development of social-emotional skills. “In education, we have a great track record of finding the wrong way to do stuff.”

Next year, the National Assessment of Educational Progress, a test of students in grades four, eight and 12 that is often referred to as the nation’s report card, will include questions about students’ social-emotional skills. A well-known international test, PISA, is moving toward the same.

The biggest concern about testing for social-emotional skills is that it typically relies on surveys asking students to evaluate recent behaviors or mind-sets, like how many days they remembered their homework, or whether they consider themselves hard workers. This makes the testing highly susceptible to fakery and subjectivity. In a paper published in May, the psychologist Angela Duckworth and David Yeager argued that even if students do not fake their answers, the tests provide an incentive for “superficial parroting” rather than real changes in mind-set.

Noah Bookman, the chief accountability officer for a group of California districts piloting the tests, said he understood the concerns about testing. But, he said, “This work is so phenomenally important to the success of our kids in school and life. In some ways, we worry as much if not more about the possibility that these indicators remain on the back burner.”

http://www.nytimes.com/2016/03/01/us/testing-for-joy-and-grit-schools-nationwide-push-to-measure-students-emotional-skills.html


Students shouldn't spend more than 2 percent of classroom time taking tests, the Education Department says.

The federal government requires states to test students 17 times before graduation: annually in math and reading in grades 3 through 8, once in those subjects during high school, and then once in science during elementary, middle and high school.

If states didn’t reach annual proficiency goals, they could be subject to a series of sanctions. And that type of accountability system resulted in states and school districts piling on their own tests to ensure students were on track to hit those proficiency marks.

States and school districts should evaluate the number of tests they administer to students and eliminate any deemed ineffective or duplicative, the Department of Education recommended in new assessment guidance released in October 2015.

The guidance emphasizes that while some tests are for accountability purposes, the vast majority of assessments should be tools in a broader strategy to improve teaching and learning.

“No single assessment should ever be the sole factor in making an educational decision about a student, an educator, or a school,” the guidance reads.

“Good assessments are a part of the learning experience, and a critical tool to make sure that all students, including our most disadvantaged students, are learning,” said John King, the No. 2 official at the Education Department, who had been selected to replace Arne Duncan as education secretary. “But duplicative, unnecessary or poor-quality, low-level tests subtract from learning time and undermine instruction. There are too many tests that do not provide useful information.”

http://www.usnews.com/news/articles/2015/10/24/education-department-recommends-less-testing

In October 2015, McGraw-Hill, one of the largest publishers in the world, released a report on a survey of more than 2,600 U.S. college students about the impact of technology on study habits. 87% of the students said learning analytics would improve their academic performance, and nearly 66% of those who already use it say its impact is “very” or “extremely” positive. Students embrace analytics as a form of continual feedback on their progress. If their school uses a social LMS to manage online learning activity on campus and blends analytics results into the personalized activity stream on the student’s home page, the experience is perceived much like the feedback they receive after taking any action on social media, and it helps them learn more effectively through continual feedback. Students also said that adaptive learning tools are the most effective study technology, beating out learning management systems.

In higher education, learning analytics is taking off, and as more vendors incorporate analytics into their tool sets, the pressure for standardization builds. There are two standards bodies in the eLearning world: IMS Global, which developed LTI for learning tools and QTI for assessment, and ADL, which developed the SCORM content standard.


Of these two, ADL was the first to notice the need for a standard way to represent and store data. As learning takes place more and more in digital environments, the volume of data collected and stored keeps growing. Learning analytics tools are burgeoning, but each tool has its own requirements for its input data, and the stored data may not be able to fulfil those requirements. ADL’s vision was to define a representation so that all tools could use the same data set. The result was the Tin Can API (also known as the Experience API or xAPI), released as version 1.0 in April 2013; a minimal statement is sketched below. There are currently over 160 adopters.
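To make the format concrete, here is a minimal sketch of an xAPI statement and of sending it to a Learning Record Store (LRS). The actor / verb / object structure follows the xAPI 1.0 specification; the endpoint URL, credentials, and activity IDs are placeholders, not a real service.

```python
import requests

# Placeholder LRS endpoint and credentials; substitute a real Learning
# Record Store to actually run this.
LRS_ENDPOINT = "https://lrs.example.com/xapi"

# An xAPI statement: who (actor) did what (verb) to what (object).
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        # Verb IDs are URIs; ADL publishes a shared vocabulary.
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/quiz-item-42",
        "definition": {"name": {"en-US": "Quiz item 42"}},
    },
    "result": {"success": True, "score": {"scaled": 0.8}},
}

response = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=statement,
    headers={"X-Experience-API-Version": "1.0.0"},
    auth=("lrs_user", "lrs_password"),  # placeholder Basic auth
)
response.raise_for_status()
```

Because every product writes statements in this shared shape, any analytics tool that reads xAPI can consume data from any conformant source.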

IMS was late to the game. Version 1.0 of the Caliper standard was released in October 2015, but the first certification was granted that very same month, and the standard was adopted very rapidly, especially in the US higher education sector. Caliper Analytics enables the collection of high-rate, real-time event data via software sensors (the Sensor API™) and information models (known as metric profiles); a simplified event is sketched below.
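For comparison with the xAPI statement above, here is a simplified sketch of a single Caliper event, modeled on the public IMS Caliper 1.0 examples. The identifiers and timestamp are illustrative; in practice a certified Sensor API implementation emits these events rather than application code building them by hand.

```python
import json

# A simplified Caliper 1.0 AssessmentEvent expressed as JSON-LD.
# All IDs below are illustrative placeholders.
caliper_event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1/Context",
    "@type": "http://purl.imsglobal.org/caliper/v1/AssessmentEvent",
    "actor": {
        "@id": "https://example.edu/users/554433",
        "@type": "http://purl.imsglobal.org/caliper/v1/lis/Person",
    },
    "action": "http://purl.imsglobal.org/vocab/caliper/v1/action#Started",
    "object": {
        "@id": "https://example.edu/assessments/algebra-quiz-1",
        "@type": "http://purl.imsglobal.org/caliper/v1/Assessment",
    },
    "eventTime": "2016-03-24T10:15:00.000Z",
}

print(json.dumps(caliper_event, indent=2))
```

The JSON-LD typing is what ties each event to a metric profile, whereas xAPI leaves the vocabulary largely open.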

It doesn’t make sense to have two standards with exactly the same interoperability goals. There were attempts from the xAPI community to build a bridge, but IMS was reluctant to find common ground. It seems the market will split, with each standard becoming de facto in its own segment: Caliper will potentially be the de facto standard in the US higher education market, while xAPI will be de facto in enterprise training and in schools, especially in Europe.

Unlike the SCORM or LTI standards, data standards do not bring the same easy incentives for vendors. Data accumulated in an LMS is used by that LMS to generate reports and add value to the product; if the LMS vendor standardizes the data store, learning analytics vendors can use it too, but there is no direct benefit to the LMS vendor. The growth in adoption therefore needs to be driven by customer demand, as institutions increasingly add this requirement to their requests for proposals. Publishers and content providers, on the other hand, are likely to be early adopters: giving institutions access to data increases the likelihood that they will remain customers, a win-win. For example, Kaltura’s open source video platform was one of the first products to receive conformance certification based on Caliper. Blackboard, D2L, Elsevier, Intellify Learning, Learning Objects, McGraw-Hill Education, and VitalSource Technologies are among the other ed tech providers to achieve conformance certification for their products.


The ultimate aim of all this data collection is to help students self-regulate their own learning by empowering them with actionable insights or actionable predictions drawn from the data. The available data is just the starting point; user experience design is crucial. Imagine tools that let learners critically examine their own performance: where they are spending their time, their study habits, what they want out of their educational experience, and how they can refine their thinking. How these tools look and feel constitutes most of their impact.


Have you heard about the “Steve Jobs Schools” yet? In these schools, you might not find any pencils or exercise books: nearly everything (homework, exercises, exams) is done via an iPad. Every student has their own iPad, which is used at school as well as at home. Learning analytics are continuously gathered, processed, and visualized for the learners themselves as well as for the teachers, enabling teachers to spend more time focusing on students’ individual strengths and weaknesses. There are currently around 20 Steve Jobs Schools, particularly in the Netherlands, and the number is growing. For more information see: http://stevejobsschool.world/


Lea's Learning Analytics Blog

Learning analytics, educational data mining, formative assessment: all recent buzzwords in educational research. In principle, the idea is to find theoretical frameworks, models, procedures, and smart tools to collect, aggregate, analyze, reason on, and visualize large-scale educational data. LEA’s BOX is a research and development project funded by the European Commission. The project aims at (a) making educational assessment and appraisal more goal-oriented, proactive, and beneficial for students, and (b) enabling formative support for teachers and other educational stakeholders on a solid basis of a wide range of information about learners. In other words, LEA’s BOX is a learning analytics toolbox intended to enable educators to perform competence-centered, multi-source learning analytics. More info at http://www.leas-box.eu!

