The recent wave of layoffs at Infosys has sparked a heated debate about the quality of programming education in India. Fresh graduates, after multiple rounds of training and assessments, were still unable to meet industry standards, leading to their termination. This incident shines a harsh light on a long-standing issue: Are Indian computer engineering graduates truly equipped with the programming skills the industry demands?
The unsettling reality is that a large fraction of engineering graduates struggle with basic coding. This isn't due to a lack of talent but to a flawed education system that prioritizes rote learning over real-world problem-solving. With outdated curricula and minimal hands-on practice, students often memorize predefined lab exercises rather than developing an intuitive understanding of programming concepts.
Beyond the Classroom: The Glaring Gap in Programming Education
Like mathematics, programming demands continuous practice, but it also requires an immersive learning environment that pushes students to reason logically and solve problems independently. However, most engineering institutions fail to provide this. The conventional lab setup offers students fewer than 20 programming exercises per semester, and even these are often repeated in final exams. This fosters a culture of memorization rather than comprehension.
To address this gap, some private universities have introduced cloud-based coding platforms. While these tools offer a structured approach to coding practice, they fall short in ensuring genuine learning. The rise of Generative AI (GenAI) tools further complicates the issue. Students can now use AI to generate code effortlessly, bypassing actual learning and making it increasingly difficult to assess their real skill levels through traditional evaluation methods.
A Hybrid Assessment Framework: The Need of the Hour
To bridge this growing disconnect, Higher Education Institutions (HEIs) must adopt a hybrid evaluation approach that blends automated testing with human-driven code walkthroughs. While automated coding platforms can assess correctness and efficiency, they cannot verify whether a student truly understands their own code.
How can this be fixed?
- Emphasis on Code Walkthroughs
Instead of relying solely on traditional viva-voce sessions, students should be required to walk examiners through their code. This method allows evaluators to ask dynamic, implementation-specific questions:
- Why did you choose this loop structure?
- How are edge cases handled?
- What made you select these variable names?
A student who has genuinely written the code can answer these with ease, while those who have relied on AI tools or copied solutions will struggle.
- Balanced Assessment Model (70-30 Split)
Institutions should implement a 70-30 assessment model:
- 70% Automated Testing: Timed coding assessments with diverse test cases conducted on secure cloud-based platforms.
- 30% Human Evaluation: Faculty-led rolling viva sessions where students explain their code in real-time, ensuring authentic learning.
- Industry-Aligned Evaluation
This approach mirrors hiring practices in IT companies, where candidates are frequently asked to explain their code logic during interviews. By incorporating similar assessments in academia, graduates will be better prepared for real-world technical challenges.
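To make the 70-30 split concrete, here is a minimal sketch of how an institution's grading script might combine the two components. The function name, the 0-100 scale, and the example scores are illustrative assumptions, not part of any prescribed standard:

```python
def hybrid_score(automated: float, viva: float) -> float:
    """Combine scores under the 70-30 hybrid assessment model.

    automated: 0-100 result from timed, auto-graded coding tests.
    viva: 0-100 result from the faculty-led code walkthrough.
    """
    if not (0 <= automated <= 100 and 0 <= viva <= 100):
        raise ValueError("scores must be in the 0-100 range")
    # Weighted sum: 70% machine-verified correctness, 30% human
    # verification that the student understands their own code.
    return round(0.70 * automated + 0.30 * viva, 2)

# A student who aces the automated tests but cannot explain the
# code still loses a substantial share of the grade:
print(hybrid_score(95, 40))  # 0.70*95 + 0.30*40 = 78.5
print(hybrid_score(70, 90))  # 0.70*70 + 0.30*90 = 76.0
```

The weighting keeps automated correctness dominant while making the walkthrough component large enough that memorized or AI-generated solutions cannot carry a passing grade on their own.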
Ensuring Effective Implementation
For this model to work, institutions must invest in the right infrastructure:
✅ Lower Student-Faculty Ratio: Ideally 60:1 or less, allowing for individualized assessments.
✅ Frequent Viva Sessions: Short 15-minute evaluations spread across the semester for a thorough skill check.
✅ External Evaluators: Independent assessment panels to ensure fairness and maintain high standards.
✅ Digital Integration: Secure coding platforms linked to Learning Management Systems (LMS) to record assessments and maintain transparency.
Fixing the Root Cause – A Call to Action
The future of programming education hinges on striking a balance between automated assessments and human verification. Cloud-based coding platforms are excellent tools, but without rigorous code explanation sessions, they risk being reduced to mere practice arenas. Authentic learning happens when students not only write code but can also explain and justify their choices.
By implementing this hybrid assessment model, institutions can ensure that graduates enter the workforce as competent programmers, not just degree holders. A well-structured evaluation system will not only reduce the risk of mass layoffs due to incompetence but also solidify India’s standing as a global tech powerhouse.
It’s time for educational institutions to wake up, adapt, and equip students with the skills they actually need—before the industry makes that decision for them.