Using Data to Improve Student Outcomes

Explore top LinkedIn content from expert professionals.

  • View profile for Pan Wu

    Senior Data Science Manager at Meta

    51,368 followers

    Incrementality testing is crucial for evaluating the effectiveness of marketing campaigns because it helps marketers determine the true impact of their efforts. Without this testing, it's difficult to know whether observed changes in user behavior or sales were actually caused by the marketing campaign or whether they would have occurred naturally. By measuring incrementality, marketers can attribute changes in key metrics directly to their campaign actions and optimize future strategies based on concrete data.

    In this blog, the data science team at Expedia Group shares a detailed guide on measuring marketing campaign incrementality through geo-testing. Geo-testing allows marketers to split regions into control and treatment groups to observe the true impact of a campaign. The guide breaks the process down into three main stages:

    - The first stage is pre-testing, where the team determines the appropriate geographical granularity: whether to use states, Designated Market Areas (DMAs), or zip codes. They then strategically select a subset of available regions and assign them to control and treatment groups. It's crucial to validate these selections using statistical tests to ensure that the regions are comparable and the split is sound.
    - The second stage is the test itself, where the marketing intervention is applied to the treatment group. During this phase, the team must closely monitor business performance, collect data, and address any issues that may arise.
    - The third stage is post-test analysis. Rather than immediately measuring the campaign's lift, the team recommends waiting through a "cooldown" period to capture any delayed effects. This waiting period also allows the control and treatment groups to converge again, confirming that the campaign's impact has ended and ensuring the model hasn't decayed.

    This structure helps calculate incremental return on advertising spend (iROAS), answering questions like "How do we measure the sales directly driven by our marketing efforts?" and "Where should we allocate future marketing spend?" The blog serves as a valuable reference for those looking for more technical insights, including the software tools used in this process.

    #datascience #marketing #measurement #incrementality #analysis #experimentation

    – – –

    Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
    -- Spotify: https://lnkd.in/gKgaMvbh
    -- Apple Podcast: https://lnkd.in/gj6aPBBY
    -- Youtube: https://lnkd.in/gcwPeBmR

    https://lnkd.in/gWKzX8X2
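
A minimal sketch of the pre-testing validation step the post describes: checking that candidate control and treatment geos are statistically comparable on a pre-period KPI before the campaign starts. The DataFrame layout, column names, synthetic numbers, and the 0.05 threshold are illustrative assumptions, not Expedia Group's actual pipeline.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical pre-period daily bookings for 20 DMAs over 8 weeks (56 days).
geos = [f"DMA_{i:02d}" for i in range(20)]
pre_period = pd.DataFrame({
    "geo": np.repeat(geos, 56),
    "bookings": rng.normal(loc=1000, scale=120, size=20 * 56),
})

# Illustrative split; in practice the assignment would be stratified or
# produced by a matching algorithm rather than a simple slice.
treatment_geos = geos[:10]
control_geos = geos[10:]

treat = pre_period.loc[pre_period["geo"].isin(treatment_geos), "bookings"]
ctrl = pre_period.loc[pre_period["geo"].isin(control_geos), "bookings"]

# Two-sample Welch t-test on the pre-period metric: a large p-value means
# no detectable difference before the intervention, i.e. the split is
# plausibly balanced.
t_stat, p_value = stats.ttest_ind(treat, ctrl, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Groups differ in the pre-period; rework the assignment.")
else:
    print("No detectable pre-period imbalance; proceed to the test stage.")
```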

  • View profile for Ashish Majumdar

    CHRO | Strategic Global HR Leader | Healthcare HR Transformation Specialist | Talent Management Catalyst | Efficiency Champion | Executive Coach | Diversity, Equity, and Inclusion Advocate

    13,736 followers

    You've just launched a reskilling program aimed at boosting digital literacy across your organization. Now, the big question is: how do you measure its success? To answer that, a combination of hard data and real-world feedback is key.

    Take the example of AT&T, which famously invested $1 billion in reskilling its workforce for the digital age. They tracked success through KPIs like training completion rates and skill acquisition. Post-training, they saw a marked increase in employees' ability to handle new technologies, evidenced by improved performance metrics.

    But metrics only tell part of the story. Gathering qualitative feedback is equally important. IBM, for instance, uses surveys and pulse checks to gauge how employees feel about their upskilling efforts. This feedback allows them to tweak programs in real time, ensuring that learning remains relevant and engaging.

    Lastly, consider long-term evaluation. Adobe ties reskilling outcomes to annual performance reviews, allowing them to see whether the new skills are leading to sustained improvements.

    This holistic approach, combining KPIs, feedback, and long-term tracking, ensures that reskilling initiatives not only deliver immediate results but also contribute to lasting change. Are you ready to measure the true impact of your reskilling efforts?

    #hr #chro #reskilling #datainsights #employeedevelopment #employeeskilling

  • View profile for Kevin Kruse

    NY Times Bestselling Author | Founder, LEADx | Keynote Speaker on Leadership, Emotional Intelligence, and Employee Engagement

    46,213 followers

    *** SPOILER *** Some early data from our 2025 LEADx Leadership Development Benchmark Report that I’m too eager to hold back: MOST leadership development professionals DO NOT MEASURE LEVELS 3&4 of the Kirkpatrick model (behavior change & impact).

    41% measure level 3 (behavior change)
    24% measure level 4 (impact)
    Meanwhile, 92% measure learner reaction.

    I mean, I know learner reaction is easier to measure. But if I have to choose ONE level to devote my time, energy, and budget to… and ONE level to share with senior leaders… I’m at LEAST choosing behavior change! I can’t help but think: if you don’t measure it, good luck delivering on it. 🤷♂️

    This is why I always advocate to FLIP the Kirkpatrick Model. Before you even begin training, think about the impact you want to have and the behaviors you’ll need to change to get there. FIRST, set up a plan to MEASURE baseline, progress, and change. THEN, start training. Begin with the end in mind!

    ___
    P.S. If you can’t find the time or budget to measure at least level 3, you probably want to rethink your program. There might be a simple, creative solution. Or, you might need to change vendors.
    ___
    P.P.S. EXAMPLE OF A SIMPLE WAY TO MEASURE LEVELS 3&4. Here’s a simple, data-informed example: you want to boost team engagement because it’s linked to your org’s goals to:
    - improve retention
    - improve productivity

    You follow a five-step process:
    1. Measure team engagement and manager effectiveness (i.e., a CAT Scan 180 assessment).
    2. Locate top areas for improvement (e.g., “effective one-on-one meetings” and “psychological safety”).
    3. Train leaders on the top three behaviors holding back team engagement.
    4. Pull learning through with exercises, job aids, and monthly power hours to discuss with peers and an expert coach.
    5. Re-measure team engagement and manager effectiveness.

    You should see measurable improvement, plus your new focus areas for next year. We do the above with clients every year...
    ___
    P.P.P.S. I find it funny that I took a lot of heat for suggesting we flip the Kirkpatrick model, only to find that most people don’t even measure levels 3&4… 😂
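
A minimal sketch of steps 1 and 5 in the post's five-step process: comparing baseline and re-measured engagement scores to quantify change. The data, the 1-5 scale, and the paired t-test are illustrative assumptions, not the LEADx or CAT Scan 180 methodology.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical engagement scores (1-5 scale) for 40 teams, measured before
# training and again after the pull-through period.
baseline = rng.normal(3.4, 0.4, size=40).clip(1, 5)
remeasure = (baseline + rng.normal(0.25, 0.3, size=40)).clip(1, 5)

# Paired t-test: each team is its own control, so we test the mean of the
# per-team differences rather than comparing two independent groups.
t_stat, p_value = stats.ttest_rel(remeasure, baseline)
lift = remeasure.mean() - baseline.mean()
print(f"Mean lift: {lift:+.2f} points (t = {t_stat:.2f}, p = {p_value:.4f})")
```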

  • View profile for Dr. Alaina Szlachta

    Data strategy advisor and implementor for training and coaching firms • Author • Founder • Measurement Architect

    8,085 followers

    We're measuring learning at the wrong time. And it's costing us real impact.

    Most learning providers measure before and after their programs. But here's what I've discovered after years of analyzing client outcomes: when we measure should be 100% determined by what we hope will happen AFTER learning, not during it.

    With this idea in mind, our measurement strategies change significantly:

    Compliance programs? Don't wait until deadlines to measure. Measure weekly so clients can support their people in actually becoming compliant.

    Skills development? If learners apply those skills daily, measure daily. If weekly, measure weekly.

    The breakthrough happens when we shift from measuring around learning experiences to measuring around desired workplace results. Here's how I've been thinking about when to measure, and it's made a real difference in the quality of the data I receive from my measurement efforts!

    For compliance programs: design measurement that helps organizations support their people in meeting requirements, not just tracking completion.

    For behavior change programs: match measurement frequency to how often learners have opportunities to apply what they learned.

    Answering "when to measure" is actually the secret backdoor to figuring out "what to measure." The simple takeaway? Stop measuring your programs. Start measuring the new behaviors participants are applying in the flow of work.

    Here's a simple flow chart to help you get started: https://lnkd.in/gB5Yh8nm

    What's been your experience with measurement timing? Have you found that when you measure changes the results you can demonstrate?

    #learningproviders #measurementmethods #datastrategy
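
A minimal sketch of the rule "match measurement frequency to application frequency." The frequency mapping, function name, and pandas-based schedule are illustrative assumptions, not the author's flow chart.

```python
import pandas as pd

# How often learners apply the skill -> how often to measure it.
CADENCE = {"daily": "D", "weekly": "W-MON", "monthly": "MS"}

def measurement_schedule(application_freq: str, start: str, end: str) -> pd.DatetimeIndex:
    """Return measurement dates matched to how often the behavior occurs."""
    try:
        freq = CADENCE[application_freq]
    except KeyError:
        raise ValueError(f"Unknown application frequency: {application_freq!r}")
    return pd.date_range(start=start, end=end, freq=freq)

# Learners apply the skill weekly, so measure weekly, not just pre and post.
print(measurement_schedule("weekly", "2025-01-06", "2025-03-31"))
```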

  • View profile for Mathieu Le Guével

    Fixing Data Governance | Stewardship, Data Catalog, Ownership

    3,597 followers

    𝗠𝗼𝘀𝘁 𝘀𝘁𝗲𝘄𝗮𝗿𝗱𝘀𝗵𝗶𝗽 𝗽𝗿𝗼𝗴𝗿𝗮𝗺𝘀 𝗳𝗮𝗶𝗹 𝘀𝗶𝗹𝗲𝗻𝘁𝗹𝘆.

    Not because the work isn't done. Because nobody knows if it's working.

    I've seen this pattern with almost every data team I've worked with. Smart people, real effort, zero visibility on impact. The CDO asks for results. The stewards show activity. Nobody's actually wrong, but nobody's aligned either.

    The fix isn't a 20-metric dashboard. That's just noise with extra steps. What actually works is simpler: 𝗠𝗮𝘁𝗰𝗵 𝘆𝗼𝘂𝗿 𝗺𝗲𝘁𝗿𝗶𝗰𝘀 𝘁𝗼 𝘆𝗼𝘂𝗿 𝗺𝗮𝘁𝘂𝗿𝗶𝘁𝘆.

    Early stage (L1–L2): Don't oversell. Prove the basics are in place. Do your critical assets have an owner? Is the complaint volume going down? How many manual fixes are still happening each month? At this stage, existence beats sophistication.

    Mid stage (L3): Shift to consistency. Are policies actually followed or just written? Are quality scores improving across domains? Are root causes fixed or just patched until next quarter?

    Mature stage (L4–L5): Now you can talk money. What's the trend in the cost of poor data quality? Do business teams trust the data enough to act on it without double-checking? Are stewardship SLAs respected?

    Here’s the hard truth: an L1 program reporting L5 metrics looks incompetent. An L5 program reporting L1 metrics looks irrelevant.

    Measurement isn't bureaucracy. It's the only way to make stewardship visible to the people who fund it.

    Where does your program currently stand?

    ———
    📌 Save it for later.
    👋 Follow Mathieu Le Guével to build AI-ready Data.
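
A minimal sketch of "match your metrics to your maturity": a lookup that returns only the metric set appropriate to a program's stage. The stage names and metric lists paraphrase the post; the structure is a hypothetical illustration, not a standard framework API.

```python
MATURITY_METRICS = {
    "early (L1-L2)": [
        "critical assets with a named owner (%)",
        "data complaint volume per month",
        "manual fixes per month",
    ],
    "mid (L3)": [
        "policy adherence rate",
        "quality score trend by domain",
        "root causes fixed vs. patched",
    ],
    "mature (L4-L5)": [
        "cost of poor data quality (trend)",
        "share of decisions made without re-verifying the data",
        "stewardship SLA compliance",
    ],
}

def metrics_for(stage: str) -> list[str]:
    """Return the metrics a program at this stage should report, not more."""
    if stage not in MATURITY_METRICS:
        raise ValueError(f"Unknown stage: {stage!r}")
    return MATURITY_METRICS[stage]

for metric in metrics_for("early (L1-L2)"):
    print("-", metric)
```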

  • View profile for Ann-Murray Brown🇯🇲🇳🇱

    Monitoring and Evaluation | Facilitator | Gender, Diversity & Inclusion

    127,241 followers

    Your project started without a baseline? Welcome to 90% of real-world Monitoring and Evaluation.

    Most programmes launch with urgency, political pressure, or donor timelines, not perfect data systems. That doesn’t mean you can’t measure change. It just means you need to reconstruct the “before” using the tools seasoned evaluators rely on:

    🔹 Start with what already exists
    Intake forms, early reports, planning documents, grant proposals: even if they weren’t created for MEL, they often contain reference points you can extract.

    🔹 Use recall methods strategically
    Ask participants and staff to describe conditions before the intervention, but anchor their memory to major events:
    ↳ “Before the school opened…”
    ↳ “Before the water point was installed…”
    This reduces bias and increases accuracy.

    🔹 Pull secondary data to fill the gaps
    Census tables, ministry surveys, NGO assessments: anything close in geography and timeframe can provide a credible reference.

    🔹 Triangulate relentlessly
    Never rely on one source. Cross-check community recall with government data, staff insights, and documentation.

    Retrospective baselines aren’t shortcuts. They’re structured, defensible methods for rebuilding the past, and they’re what experienced evaluators use when perfection isn’t possible (which is most of the time).

    🔥 If you want more practical MEL techniques like this, with no jargon and no theory-only talk, join my mailing list for weekly insights that will sharpen your practice.

    #Baseline
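
A minimal sketch of triangulating a retrospective baseline from several imperfect sources. The source names, values, and the median-plus-range summary are illustrative assumptions, not a prescribed MEL method.

```python
from statistics import median

# Hypothetical estimates of household water access before the project,
# reconstructed from different sources.
baseline_estimates = {
    "participant recall (event-anchored)": 0.42,
    "district census table (closest year)": 0.38,
    "NGO assessment (neighboring area)": 0.45,
    "project intake forms": 0.40,
}

values = list(baseline_estimates.values())
central = median(values)
spread = max(values) - min(values)

print(f"Triangulated baseline: {central:.0%} (range {min(values):.0%}-{max(values):.0%})")
# A wide spread is a warning: the sources disagree, so the reconstructed
# baseline should be reported with that uncertainty, not as a point fact.
if spread > 0.10:
    print("Sources diverge by >10 points; seek another source or report the range.")
```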

  • View profile for William Warshauer

    CEO at TechnoServe, International Development Nonprofit

    9,260 followers

    I wish we talked more about ROI in the nonprofit world. Funding is limited, so we have to ensure it creates the greatest possible impact. Right?

    But I still often see vague metrics like: we “reached” or “served” X number of people. That’s an output, not an outcome. It’s fine to measure outputs: number of trainings, meetings, distributions, etc. But it’s more important to measure the outCOMES: how did people’s lives measurably improve as a result of this work?

    I get that it can be hard to measure outcomes, especially in areas that don’t lend themselves to quantitative impact (governance, capacity building, etc.). But if the goal is to improve people’s economic power, then that’s what we should measure. And we should do it in a way that:

    ➡️ 1) Shows the ROI based on program cost
    ➡️ 2) Calculates attribution, i.e., how much revenue change is attributable to your organization’s work vs. external market factors
    ➡️ 3) Doesn’t assume unrealistic lasting impact. (For instance, TechnoServe assumes continued revenue improvement as a result of our work for THREE years afterward, no more. We often see indications that the impact lasts longer. But until we have more data from post-project evaluations, which are too rarely funded, we use a conservative, low-end estimate.)

    🔸🔸🔸

    So here's a quick breakdown of TechnoServe's 2025 ROI:

    ♦️ This past year, the people TechnoServe worked with gained an average $5.70 in additional revenue for every $1 we spent working with them 👇
    ♦️ The ROI of all our closed projects last year ranged from over 20-to-1 to less than 1.
    ♦️ For very low-ROI projects: some were simply failures. Others were pilots that we hope will attain a positive ROI as they scale.
    ♦️ For very high-ROI projects: some may not reflect true cost-effectiveness; e.g., they might have had unusually low starting points due to COVID. Others may be truly impactful projects, where we’ll seek to replicate successful elements where we can.

    But we’ll be looking at the WHY behind all these scores to see what we can improve or scale.

    👉 I think the development world owes it to the people fighting poverty worldwide to find what works and fix what doesn't. And that starts with working as hard as possible to measure things right.
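
A minimal sketch of the ROI logic the post outlines: attribute part of the observed revenue gain to the program, assume it persists for three years and no longer, then divide by program cost. The numbers and the flat attribution factor are illustrative assumptions, not TechnoServe's actual model.

```python
def program_roi(
    annual_revenue_gain: float,   # observed per-year revenue increase ($)
    attribution: float,           # share attributable to the program vs. market factors
    program_cost: float,          # total program cost ($)
    persistence_years: int = 3,   # conservative cap on lasting impact
) -> float:
    """Revenue-gain-to-cost ratio under a capped-persistence assumption."""
    attributable_gain = annual_revenue_gain * attribution * persistence_years
    return attributable_gain / program_cost

# Hypothetical project: $400k/yr observed gain, 60% attributable, $300k cost.
roi = program_roi(400_000, attribution=0.6, program_cost=300_000)
print(f"ROI: {roi:.1f}-to-1")  # -> ROI: 2.4-to-1
```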

  • View profile for Magnat Kakule Mutsindwa

    Regional MEAL Expert & Consultant | Trainer & Coach | 15+ yrs across 15 countries | Driving systems, strategy, evaluation & performance | Major donor programmes (USAID, EU, UN, World Bank)

    62,074 followers

    Monitoring and Evaluation (M&E) systems form the backbone of program accountability, learning, and improvement. This document, Developing a Monitoring and Evaluation Plan, offers a step-by-step guide for creating robust and responsive M&E frameworks tailored to the complexities of humanitarian and development programs. It emphasizes the importance of aligning indicators, data collection methods, and reporting processes with program goals to ensure reliable and actionable insights.

    The content covers critical components of an effective M&E plan, including defining SMART indicators, setting baselines and targets, and establishing data acquisition and reporting methods. Humanitarian professionals will benefit from its practical focus on data quality, emphasizing validity, reliability, and timeliness as essential criteria for ensuring the credibility of findings. Additionally, the guide explores various data collection techniques, from surveys to focus group discussions, offering strategies to select the most appropriate methods for different contexts.

    This document serves as a comprehensive resource for M&E practitioners committed to optimizing program performance. By mastering the tools and principles presented, professionals can design M&E systems that drive evidence-based decision-making, enhance program accountability, and foster meaningful impact in humanitarian interventions.
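
A minimal sketch of the indicator structure such an M&E plan defines: each indicator carries a baseline, a target, a data source, and a collection frequency. The field names and example are hypothetical illustrations, not the document's actual template.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    baseline: float
    target: float
    data_source: str           # e.g., household survey, clinic records
    collection_frequency: str  # e.g., monthly, quarterly

    def progress(self, current: float) -> float:
        """Share of the baseline-to-target distance covered so far."""
        if self.target == self.baseline:
            raise ValueError("Target must differ from baseline.")
        return (current - self.baseline) / (self.target - self.baseline)

ind = Indicator(
    name="Children under 5 receiving full vaccination (%)",
    baseline=62.0,
    target=85.0,
    data_source="district health facility records",
    collection_frequency="quarterly",
)
print(f"{ind.name}: {ind.progress(71.0):.0%} of the way to target")
```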

  • View profile for Roseline Adewuyi, Ph.D.

    ✯ PhD in French Literature and Gender Studies, Purdue University, USA ✯ Gender & Development Advocate ✯ Communication Specialist ✯ Speaker ✯ Writer ✯

    23,459 followers

    How to Quantify Your Impact Like a Professional

    A lot of people are doing incredible work but struggle to communicate it in a way that is measurable, credible, and compelling. If you want to stand out in applications, interviews, or leadership opportunities, you must learn to translate your work into data. Here is a simple guide to help you start.

    1. Weak vs. Strong Impact Statements
    Weak: “I worked on a literacy program that reached some students in different communities. We partnered with a few groups and distributed some books. Reading time improved.”
    Strong: “In 12 months, I expanded a literacy program to reach 1,450 students across 22 communities, mobilized 102 volunteers, built 12 partnerships, delivered 4,200 books, and increased reading time by 68%.”
    This is the standard you should aim for. Numbers matter.

    2. What You Should Measure
    Start tracking clear, countable indicators like:
    • People reached
    • Workshops/events held
    • Partnerships built
    • Funds raised
    • Resources distributed
    • Social media reach
    • Volunteer hours
    • Pre/post assessment results
    These are your core impact metrics.

    3. Turn Activities Into Data
    Vague: “Trained many students.”
    Specific: “Trained 350 students across 12 schools in Jos, Nigeria, in 3 months.”
    Specificity turns your work into evidence.

    4. Use Before & After Metrics
    Show growth by comparing where you started vs. where you are now:
    • Volunteers: 15 → 52
    • Partnerships: 0 → 7
    • Retention rates: +47%
    Before-and-after numbers make progress visible and undeniable.

    5. Use Percentages + Timeframes
    Percentages show the scale of change:
    • Engagement increased 47%
    • Comprehension improved 40%+
    • Attendance rose 50%+
    Timeframes add clarity:
    • in 6 months
    • in one academic year
    • Jan–Sept 2023
    Always anchor numbers in time.

    6. Visualize Your Impact
    When writing reports or applications, use visuals such as:
    📊 Bar charts
    📈 Growth lines
    🔢 Big bold numbers
    📍 Maps of communities served
    Visuals help your data tell a memorable story.

    7. Build a “Metrics Bank”
    Keep one simple file that tracks:
    • Outreach numbers
    • Volunteers mobilized
    • Partnerships
    • Media features
    • Annual summaries
    Then reuse these metrics in applications:
    • “Reached 6,200+ students.”
    • “Mobilized 52 volunteers contributing 5,000 hours.”
    • “Raised ₦4.2M in community donations.”

    Your work matters. Your numbers prove it.
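
A minimal sketch of points 4, 5, and 7 above: a tiny "metrics bank" that stores before/after values and formats them into time-anchored impact statements. The structure and example numbers are illustrative, not the author's file format.

```python
metrics_bank = {
    # metric: (before, after, timeframe)
    "volunteers": (15, 52, "in one academic year"),
    "partnerships": (0, 7, "in one academic year"),
    "students reached": (400, 1450, "in 12 months"),
}

def impact_statement(metric: str, before: float, after: float, timeframe: str) -> str:
    """Render before -> after growth as a percentage anchored in time."""
    if before == 0:
        # Percent change from zero is undefined; report the raw jump instead.
        return f"{metric}: 0 -> {after:g} {timeframe}"
    pct = (after - before) / before * 100
    return f"{metric}: {before:g} -> {after:g} ({pct:+.0f}%) {timeframe}"

for metric, (before, after, when) in metrics_bank.items():
    print(impact_statement(metric, before, after, when))
```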

  • View profile for Akanksha Singh

    Analyzing and Visualizing Data for Informed Business Decisions & Enhanced Product Innovation | Tableau | SQL | Python | Power BI | Product | Airflow

    8,823 followers

    Day 2: What I’d Do as an Analyst – Measuring Amazon’s Loyalty Program Success

    Hi everyone! Welcome to Day 2 of my 7-day series, “What I’d Do as an Analyst.” Today, I’m tackling a scenario where Amazon launches a new loyalty program for frequent shoppers.

    The Scenario
    Amazon wants to reward frequent shoppers with a loyalty program. The question is: what KPIs would I track to measure success, and how would I evaluate its impact on revenue?

    Step 1: Understanding the Problem
    Loyalty programs aim to drive retention and revenue. Key questions include:
    - Are shoppers buying more often or spending more?
    - What is the short-term vs. long-term value of the program?

    Step 2: Key KPIs to Track
    To measure the success of the loyalty program, I’d focus on:
    1️⃣ Customer Retention Metrics
    - Repeat Purchase Rate: Are loyalty program members shopping more frequently?
    - Churn Rate: Has the program reduced the percentage of customers leaving Amazon?
    2️⃣ Engagement Metrics
    - Enrollment Rate: How many eligible customers are signing up for the program?
    - Program Engagement: Are members actively redeeming rewards or benefits?
    3️⃣ Revenue Impact
    - Average Order Value (AOV): Are loyalty members spending more per transaction?
    - Incremental Revenue: How much additional revenue is directly tied to loyalty members?
    4️⃣ Customer Lifetime Value (CLV)
    - Are loyalty members showing a higher CLV compared to non-members over time?
    5️⃣ Program Costs
    - Are the costs of running the program (e.g., discounts, rewards) sustainable relative to the revenue it generates?

    Step 3: The Solution Approach
    Here’s how I’d evaluate the program’s impact:
    1️⃣ Segment and Compare
    - Create separate customer segments (e.g., loyalty members vs. non-members) and compare key metrics like AOV, repeat purchase rate, and CLV.
    - Use cohort analysis to track how customer behavior changes over time.
    2️⃣ Track Behavior Changes
    - Monitor whether loyalty members are increasing their purchase frequency, spending more, or trying new product categories.
    - Analyze redemption behavior: are members redeeming rewards in ways that drive repeat purchases?
    3️⃣ Run Controlled Experiments
    - Implement A/B testing by offering the program to a test group and comparing their behavior to a control group.
    - Evaluate the program’s incremental impact on revenue while controlling for external factors like seasonality.
    4️⃣ Evaluate Long-Term Sustainability
    - Use predictive modeling to estimate the program’s long-term impact on revenue, factoring in retention improvements and increased CLV.
    - Monitor program costs to ensure a healthy ROI.

    Step 4: Expected Outcome
    - Retain more customers and increase their lifetime value.
    - Drive higher revenue through increased purchase frequency and basket sizes.
    - Ensure the loyalty program remains profitable and scalable over time.

    What KPIs would you prioritize to measure success? Share your thoughts below! 👇

    #DataAnalytics #KPIs #BusinessGrowth #EcommerceInsights
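
A minimal sketch of the "Segment and Compare" step above: computing AOV and repeat purchase rate for members vs. non-members from an order table. The schema and data are illustrative assumptions, not Amazon's.

```python
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3, 4, 5, 5],
    "is_member":   [True, True, False, True, True, True, False, False, False],
    "order_value": [62.0, 45.5, 30.0, 81.0, 55.0, 74.5, 25.0, 40.0, 38.0],
})

# AOV per segment: mean order value.
aov = orders.groupby("is_member")["order_value"].mean()

# Repeat purchase rate per segment: share of customers with 2+ orders.
orders_per_customer = orders.groupby(["is_member", "customer_id"]).size()
repeat_rate = (orders_per_customer >= 2).groupby("is_member").mean()

summary = pd.DataFrame({"aov": aov, "repeat_rate": repeat_rate})
print(summary)
# If members show higher AOV and repeat rate, the next step is a controlled
# experiment to check the lift is caused by the program, not self-selection.
```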
