
What Is ChatGPT? And What Does It Mean for Technical Hiring?

Written By HackerRank | January 20, 2023

Since its public debut in November 2022, ChatGPT has taken the world by storm. In only five days, it surged to one million users. In just over a month, the valuation of OpenAI, the company behind it, grew to $29 billion.

Across sectors, there’s a growing chorus of questions about the implications of large language models (LLMs) like ChatGPT. Will these AI-enabled tools change education and make essay writing obsolete? Can they generate creative enough ideas to power mainstream ad campaigns? Will tools like ChatGPT provide a viable alternative to traditional search engines?

We’re asking some equally big questions ourselves: How well can ChatGPT actually code? And what impact will LLMs have on the broader world of computer programming? 

AI-powered innovation like ChatGPT is poised to fundamentally change the relationship between developers and coding, including how employers assess technical skills and hire developers. With that in mind, we dove deep into the details of ChatGPT, its impact on skill assessments, and what its development means for the future of technical hiring.

Key Takeaways:

  • The coding potential of LLMs has reinforced the need for strategies and tools for upholding the integrity of coding assessments.
  • Strong proctoring tools and plagiarism detection systems have become essential, and can help protect even solvable questions. 
  • Employers should avoid multiple choice questions and problems that have answers so short that a plagiarism detection system can’t detect when a candidate has received help from a tool like ChatGPT.
  • Continued growth of artificial intelligence will redefine the real-world application of coding skills and, in the process, change technical hiring as we know it.
  • HackerRank is embracing AI and will pursue innovative ideas that imagine a future of programming in an AI-driven world.

What is ChatGPT?

On a basic level, ChatGPT is an example of a large language model: a computer system trained on massive text datasets and built with billions of parameters. This scale extends the system’s text capabilities beyond traditional AI and enables it to respond to prompts with minimal or no task-specific training data.

The goal of ChatGPT’s developer, OpenAI, was to create a machine learning system that can carry on a natural conversation. In practice, ChatGPT functions like a search engine or content creation system, synthesizing billions of data points into custom responses. 

Developing a Smart Conversational Agent

The development of ChatGPT incorporated two innovative approaches: 

  1. ChatGPT is powered by the well-known ML model GPT-3.5. The model is trained to predict the next few words of an incomplete sentence. The main idea behind this approach is that, after training on billions of data points, the model starts to understand enough about the human world to complete sentences plausibly.
  2. ChatGPT uses a human-in-the-loop system to continuously improve and answer questions in a more human-like fashion. OpenAI hired thousands of contractors to write human-like responses to challenging prompts, and training the model on these answers improved ChatGPT’s responses at a remarkable rate.
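
The training objective in the first step can be illustrated with a toy sketch: a bigram model that counts which word follows which in a tiny corpus, then completes a prompt with the most likely continuation. (Real LLMs use transformer networks trained on billions of documents, but the predict-the-next-word objective is analogous.)

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of sentences an LLM trains on.
corpus = (
    "the model completes the sentence "
    "the model predicts the next word "
    "the model learns from text"
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(prompt: str, steps: int = 3) -> str:
    """Greedily extend a prompt with the most likely next word."""
    words = prompt.split()
    for _ in range(steps):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the model"))  # → "the model completes the model"
```

Even this toy version shows both the strength and the weakness of the objective: the output is fluent-looking, but nothing guarantees it is correct or meaningful.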

Now that the training process is complete, ChatGPT is inexpensive enough to run that it can serve responses to users on everyday devices. This sets it apart from models like AlphaCode, which are thought to be prohibitively expensive to run even after training is complete.

What Are the Strengths of ChatGPT?

Using the process above, OpenAI trained ChatGPT on a vast portion of recorded human knowledge. This enables ChatGPT to:

  • Create never-before-seen sentences and code. Because it has seen billions of sentences and lines of code, ChatGPT can synthesize that information to form answers that can be perceived as novel. However, there’s no guarantee that this code will be correct or optimal.
  • Combine ideas that it has seen separately but never in combination. For example, ChatGPT can write an answer to a coding question in the writing style of a specific author. 
  • Exhibit a breadth of information. ChatGPT is trained on so much data that it has seen examples of most common situations and their potential variations. This enables it to give specific answers to niche questions or generalized answers based on more specific data.

What Are the Limitations of ChatGPT?

While ChatGPT outputs human-like sentences that are easy to mistake for true intelligence, the tool does have shortcomings. 

In describing the tool’s limitations, OpenAI explained that ChatGPT may occasionally “generate incorrect information” or “produce harmful instructions or biased content.” Industry publications have described ChatGPT as confidently wrong, exhibiting a tone of confidence in its answers, regardless of whether those answers are accurate. 

ChatGPT lacks the ability to fact-check itself or conduct logical reasoning. It often answers questions incorrectly and can be tricked relatively easily. Technologists have also noted its propensity to “hallucinate,” a term used to describe when an AI gives a confident response that is not justified by its training data.

How ChatGPT Impacts Assessment Content

As a coding tool, ChatGPT excels at certain types of technical problems—but also has its limitations. A strong content strategy will be necessary to test your current coding challenges and prioritize the questions, and question types, that are less susceptible to AI coding support. 

ChatGPT has probably seen almost all published algorithms. But ChatGPT isn’t just able to answer these algorithm questions correctly. It’s also able to write new implementations of those algorithms, answer freeform questions, and explain its work.

As a result, ChatGPT can answer the following question types with reasonable accuracy:

  • Well-known algorithms. It’s safe to assume that ChatGPT has seen, and is able to answer, all publicly available coding problems on platforms such as LeetCode and Stack Overflow. If an algorithm appears in online forums or on practice websites, ChatGPT will likely answer it correctly.
  • Minor variations of problems. ChatGPT does well on variations that add to the solution rather than change it in any substantial way. The system can, for example, easily reverse the order of an array of numbers.
  • Multiple choice questions. When presented with a question and several potential answers, ChatGPT can usually identify the correct one.

For hiring teams who administer coding challenges, that doesn’t mean you should necessarily avoid all questions that ChatGPT can solve. With the right protections in place, even questions solvable by AI can still be reliable. The key is to avoid questions that have answers so short that a plagiarism detection system can’t detect when a candidate has used a tool like ChatGPT. Even so, we are evolving our library with new types of content specifically designed with AI code assistance tools in mind.

Taking all of this into account, there are some actions you can take today to limit your hiring content’s exposure to the risk of plagiarism, including: 

  • Avoid easily solved multiple choice questions
  • Avoid simple prompts to solve for common or widely available algorithm variants
  • Remove questions that require only a few lines of code to solve
  • Use proctoring tools and plagiarism detection systems
  • Combine coding tests with virtual interviewing tools to add empirical data to the hiring process

Ensuring Assessment and Hiring Integrity

In a world where humans and machines alike can write code, the ability to detect the use of AI-coding tools is invaluable. As such, employers increasingly turn to strategies and technologies that enable them to uphold the integrity of their technical assessments.

Assessment integrity has two core pillars: proctoring tools and plagiarism detection.

Proctoring Tools

One important component of ensuring assessment integrity is to build systems that provide the right proctoring capabilities. 

Proctoring is the process of capturing behavioral signals from a coding test, and its purpose is twofold. First, proctoring tools record data points that support plagiarism detection. Second, proctoring tools also act as a deterrent against plagiarism, as candidates who know that proctoring is in place are less likely to engage in such activity.

The key behavioral signals that proctoring tools often record include:

  • Tab proctoring. Monitors if the candidate switches between tabs.
  • Copy-paste tracking. Tracks if a candidate pastes copied code in the assessment.
  • Image proctoring. Captures and records periodic snapshots of the candidate.
  • Image analysis. Analyzes webcam photos for suspicious activity.
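
As a sketch, signals like these might be collected into a simple event log on the assessment backend. The event names and structure below are illustrative assumptions, not HackerRank’s actual schema:

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class ProctoringLog:
    """Collects behavioral signals captured during a coding test."""
    events: list = field(default_factory=list)

    def record(self, kind: str, detail: str = "") -> None:
        # kind is a hypothetical label, e.g. "tab_switch", "paste", "snapshot".
        self.events.append({"kind": kind, "detail": detail, "ts": time()})

    def count(self, kind: str) -> int:
        """How many times a given signal fired during the session."""
        return sum(1 for e in self.events if e["kind"] == kind)

# Simulated session: two tab switches and one paste event.
log = ProctoringLog()
log.record("tab_switch")
log.record("paste", detail="42 chars pasted")
log.record("tab_switch")
print(log.count("tab_switch"))  # → 2
```

Counts like these are not verdicts on their own; they become inputs to the plagiarism detection step described below.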

Plagiarism Detection

In addition to proctoring tools, the integrity of an assessment also relies on plagiarism detection: the ability to flag when a candidate likely received outside help. 

The current industry standard for plagiarism detection relies heavily on MOSS-style code similarity. Not only does this approach often lead to higher false-positive rates, but it also unreliably detects plagiarism originating from conversational agents like ChatGPT. That’s because ChatGPT can produce somewhat original code, which can circumvent similarity tests.
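
To see why similarity-based detection struggles with generated code, consider a simplified stand-in for tools like MOSS: Jaccard similarity over character k-grams of whitespace-normalized code. (MOSS itself uses winnowed fingerprints, but the principle is the same: a straight copy with renamed variables still shares most of its k-grams, while freshly generated code shares almost none.)

```python
def kgrams(code: str, k: int = 5) -> set:
    """Strip whitespace and take all overlapping k-character substrings."""
    text = "".join(code.split())
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def similarity(a: str, b: str, k: int = 5) -> float:
    """Jaccard similarity of the two codes' k-gram sets (0.0 to 1.0)."""
    ga, gb = kgrams(a, k), kgrams(b, k)
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

original = "def add(a, b):\n    return a + b"
copied   = "def add(x, y):\n    return x + y"   # copy with renamed variables
fresh    = "print('hello world')"               # unrelated code

print(similarity(original, copied))  # relatively high
print(similarity(original, fresh))   # near zero
```

A candidate who pastes ChatGPT output looks, to this kind of test, more like the `fresh` case than the `copied` one, which is exactly the gap proctoring signals help close.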

While the launch of ChatGPT caught many by surprise, the rise of LLMs has been a popular topic in technical communities for some time. Anticipating the need for new tools to ensure assessment integrity, HackerRank developed a state-of-the-art plagiarism detection system that combines proctoring signals and code analysis.

Using machine learning to characterize certain coding patterns, our algorithm checks for plagiarism based on a number of signals. Our model also uses self-learning to analyze past data points and continuously improve its confidence levels.
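
HackerRank has not published the model’s internals, but the general shape of combining several signals into one confidence level can be sketched with a logistic score. The features, weights, and bias below are entirely hypothetical:

```python
import math

# Hypothetical feature weights, purely illustrative, not HackerRank's model.
WEIGHTS = {"paste_events": 0.9, "tab_switches": 0.4, "similarity": 2.5}
BIAS = -3.0

def suspicion_score(features: dict) -> float:
    """Logistic combination of behavioral and code-analysis signals."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))  # probability-like score in (0, 1)

# A quiet session versus one with heavy pasting and tab switching.
clean   = suspicion_score({"paste_events": 0, "tab_switches": 1, "similarity": 0.1})
flagged = suspicion_score({"paste_events": 3, "tab_switches": 6, "similarity": 0.8})
```

In a real system the weights would be learned from labeled past assessments rather than hand-set, which is what allows confidence levels to improve over time.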

The result is a brand new ML-based detection system that is three times more accurate at detecting plagiarism than traditional code similarity approaches—and can detect the use of external tools such as ChatGPT.

Embracing Artificial Intelligence

As exciting as the launch of ChatGPT has been, LLMs with these capabilities are only the beginning. While it’s hard to predict the future, one thing is certain: AI technology is in a nascent state and will continue to grow at a rapid rate.

In the short term, the key to evolving your hiring strategy hinges on a renewed focus on content innovation and assessment integrity. By combining a strong question strategy with advanced proctoring and plagiarism detection, hiring teams can protect their assessment integrity and hire great candidates.

In the long term, we anticipate that artificial intelligence will redefine developer skills and, in the process, change technical hiring as we know it. 

At HackerRank, our mission is to accelerate the world’s innovation. As such, we welcome this new wave of technological transformation and will pursue innovative ideas that imagine a future of programming in an AI-driven world. 

Frequently Asked Questions

Can Your Plagiarism Detection System Detect Code From ChatGPT?

Yes. Our AI-enabled plagiarism detection system feeds several proctoring and user-generated signals into an advanced machine-learning algorithm to flag suspicious behavior during an assessment. By understanding code iterations made by the candidate, the model can detect if they had external help, including from ChatGPT.

When Will the Plagiarism Detection System Be Available?

The new plagiarism system is currently in limited availability, with plans for general availability in early 2023. If you would like to participate in our limited availability release, please let your HackerRank customer success manager know, and we would be happy to enable it for you.

Can You Validate if My Coding Questions Are Easily Solved by ChatGPT and Provide Replacement Options?

If you would like assistance in verifying how ChatGPT responds to your custom coding questions, we can run a report and provide content recommendations based on the results. Please contact our HackerRank Support Team, who would be happy to help. 

Should I Avoid All Questions That ChatGPT Can Solve? 

No. HackerRank’s proctoring tools and plagiarism detection system can protect even solvable questions. Instead, avoid multiple choice questions and problems with very easy or short answers.

I Still Have Questions About ChatGPT. Who Should I Contact?

If you’re a customer looking for support on plagiarism and its impact on your business, you can contact your customer success manager or our team at support@hackerrank.com.

How Does a HackerRank DMCA Takedown Work?