Ketman's Guide to Identifying a Suspicious Github Account Associated with DPRK

Investigation

The investigation into the threat actor associated with DPRK activities has revealed several insights into how to track this actor based on how they present themselves on GitHub. This analysis allowed us to identify their context through follower and following patterns within two to three degrees of separation. We also found that the use of certain images can be a key signal when analyzing the networks related to this actor.

How to Identify a Suspicious GitHub Account Associated with DPRK Threat Actors

A GitHub account can also be identified from its broader context (country, connections, social networks, and social activity) and by analyzing its relationships within that context, including follower and following patterns.

When analyzing an account to determine whether it might be related to a threat group like Lazarus, it’s essential to consider the following aspects:

  • Creation Date: Many accounts were created between May and December 2023 and exhibit sporadic or unusual repository activity. However, there have also been instances where stolen or purchased accounts are used, obtained from other actors operating on GitHub who offer such services.

  • Follow/Followers: These accounts are often interconnected, frequently following “node” accounts that serve as hubs within the network. Examine the follower and following patterns up to two or three degrees to identify anomalies within their immediate context (a minimal sketch of this traversal follows at the end of this checklist).

  • Suspicious Repository Activity: Common patterns include excessive forking, starring empty profiles, hosting identical repositories, and sharing similar “projects.”

  • Profile Descriptions: Many profiles feature generic bios, such as “Full Stack Developer” with “+5 to +8 years of experience.” Broken social media links, or profiles that heavily promote their social media presence, are also common and are unusual for genuine developer accounts.

  • Personality: Most of these profiles lack a distinct identity and show little personalization. Many descriptions and organizational structures on their GitHub profiles appear identical or generic.

  • Skills: Frequently listed skills include “Full Stack Blockchain Developer,” “Full-Stack Software Engineer,” “AI/ML Engineer,” and “Senior AI & Full Stack Developer.”

  • Social Networks: Indicators of inactivity include LinkedIn profiles with minimal engagement, fake GitHub profiles linked to legitimate accounts, suspicious Instagram/Facebook accounts, and a general lack of recent social activity.

  • Context: Red flags include broken links, irregular GitHub statistics, AI-generated profile images, profiles based in Latin America with seemingly mismatched Asian appearances, and accounts lacking historical data.

  • Logical Pattern: Activities unrelated to the account’s stated purpose, mismatched skill sets, and inconsistencies in knowledge domains.

  • Internal Association: Connections to specific organizations, interest in certain groups, and a pattern of forking projects tied to particular organizations or companies.

  • External Association: The presence of these accounts on other social networks or freelance platforms like Upwork, and connections to associated accounts that engage in freelance work.

This approach can also help identify suspicious accounts by highlighting key characteristics and connections within their network.
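To make the creation-date and follower/following checks above concrete, here is a minimal sketch (not a hardened tool) that walks a seed account’s network to two degrees via the public GitHub REST API and flags accounts created in the window of interest. The seed login and the date window are illustrative assumptions.

```python
import os
from collections import deque
from datetime import datetime, timezone

import requests

API = "https://api.github.com"
HEADERS = {"Accept": "application/vnd.github+json"}
if os.getenv("GITHUB_TOKEN"):  # optional token; raises the API rate limit
    HEADERS["Authorization"] = f"Bearer {os.environ['GITHUB_TOKEN']}"

SEED = "some-suspicious-login"  # hypothetical starting account
WINDOW = (datetime(2023, 5, 1, tzinfo=timezone.utc),
          datetime(2023, 12, 31, tzinfo=timezone.utc))


def profile(login: str) -> dict:
    """Fetch the public profile of a user."""
    r = requests.get(f"{API}/users/{login}", headers=HEADERS, timeout=30)
    r.raise_for_status()
    return r.json()


def neighbors(login: str, relation: str, limit: int = 100) -> list[str]:
    """Return up to `limit` logins from /followers or /following."""
    r = requests.get(f"{API}/users/{login}/{relation}",
                     headers=HEADERS, params={"per_page": limit}, timeout=30)
    r.raise_for_status()
    return [u["login"] for u in r.json()]


seen, queue = {SEED}, deque([(SEED, 0)])
while queue:
    login, depth = queue.popleft()
    created = datetime.fromisoformat(profile(login)["created_at"].replace("Z", "+00:00"))
    marker = "[!]" if WINDOW[0] <= created <= WINDOW[1] else "   "
    print(f"{marker} {login} (depth {depth}, created {created.date()})")
    if depth < 2:  # two degrees of separation from the seed
        for nxt in neighbors(login, "followers") + neighbors(login, "following"):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
```

In practice you would paginate past the first 100 results, respect the API rate limits (60 requests per hour unauthenticated, far more with a token), and persist the collected edges so they can feed the overlap and hub checks described later.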

What do accounts linked to this suspicious campaign look like?

Based on our investigation, we have identified certain patterns in the creation of these accounts, such as creation dates, skills, similar images in profiles, comparable bios, and analogous GitHub handles, among other aspects.

Regarding their self-identification through images, we have found and classified their accounts based on how they present themselves.

Some aspects to consider in this image classification:

  • To be clear, not every account using these images is connected to the campaign of suspicious GitHub accounts associated with DPRK threat actors.

  • “SuperStar” is a name they use consistently across the campaign (in GitHub handles, profile images, and profile text), and it has been observed repeatedly among these accounts.

  • We found that these profiles, which have a substantial following, often identify themselves with images featuring the number one, frequently complemented by gold, stars, and the color red.

  • There are also profile images drawn from anime, movies, and other themes; accounts using them are interconnected and commonly appear among each other’s followers.

  • While there is a diverse range of images, many accounts follow the pattern of presenting themselves as “developers” while aligning with a specific image.

Our classification of images suggests the existence of some kind of category or rank structure among their accounts. Additionally, groups of accounts with specific images appear to serve particular functions.

Most of the GitHub accounts discussed below list their skills as “Full Stack Developer”, “Blockchain Engineer”, or “AI | Blockchain | Full Stack Engineer | DevOps”, which are among the most popular titles.
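Because these generic titles recur so often, a simple keyword filter over profile bios is a quick first-pass triage step. The sketch below is illustrative only; the keyword patterns and sample bios are assumptions for demonstration, not a validated signature.

```python
import re

# Generic "skills" phrasing repeatedly seen in these bios (illustrative list)
SUSPICIOUS_TITLES = [
    r"full[\s-]?stack (?:blockchain )?developer",
    r"full[\s-]?stack software engineer",
    r"blockchain engineer",
    r"ai\s*/?\s*ml engineer",
    r"senior ai",
    r"\d+\s*\+?\s*years? of experience",
]
PATTERN = re.compile("|".join(SUSPICIOUS_TITLES), re.IGNORECASE)


def looks_generic(bio: str) -> bool:
    """True if the bio matches the generic-title phrasing above."""
    return bool(PATTERN.search(bio or ""))


# Hypothetical sample bios
for bio in ["Senior AI & Full Stack Developer | 8+ years of experience",
            "Rust compiler hacker, occasional conference speaker"]:
    print(f"{bio!r} -> {'matches generic phrasing' if looks_generic(bio) else 'no match'}")
```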

# Profile Image Patterns and Behavioral Traits of Identified Accounts

Among the identified accounts, we’ve observed six distinct types of profile images frequently used for self-identification. These images and identities often correlate with specific account behaviors. For instance, some accounts:

  • exclusively follow female profiles
  • boast over 50,000 followers
  • actively monitor their targets
  • appear to be interconnected through shared followers or followed accounts (an overlap check is sketched below)
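A simple way to quantify the “shared followers” interconnection noted in the last bullet is to compare follower sets already collected for candidate accounts, for example with the traversal sketched earlier. The account names and follower sets below are invented placeholders.

```python
from itertools import combinations

# account -> set of follower logins (hypothetical data collected beforehand)
followers = {
    "dev-alpha": {"hub1", "hub2", "x1", "x2", "x3"},
    "dev-beta":  {"hub1", "hub2", "x4", "x5"},
    "dev-gamma": {"hub2", "x6"},
}


def jaccard(a: set, b: set) -> float:
    """Overlap between two follower sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


for (u, fu), (v, fv) in combinations(followers.items(), 2):
    print(f"{u} <-> {v}: jaccard={jaccard(fu, fv):.2f}, shared={sorted(fu & fv)}")
```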

# #1 Type of accounts: Star - “SuperStar”

Among the most significant accounts, we have observed that those featuring images with the number one, golden spikes, and stars appear to function as nodes or clusters. Given the actor’s context, it is possible that these accounts, which have many followers, are used to monitor the activities of “lower-tier” entities. Additionally, since these accounts may serve as intelligence units, they could be essential for coordinating attacks and assessing the effectiveness of the units involved in the campaigns.

  • Role as Nodes or Clusters: The repeated use of symbols representing leadership or dominance (like gold stars and medals) suggests that these accounts may act as central nodes within a network. They could play a strategic role by gathering followers and acting as influential points within broader networks. These central accounts could be used to coordinate or observe the actions of associated or subordinate accounts (“lower-tier” entities) by directing their activity and gathering information (a sketch of how such hubs can be surfaced follows this list).

  • Usernames and Roles: Usernames continue to focus on common developer keywords like “Full Stack Developer,” “AI Engineer,” “Super Dev,” and various references to technology stacks or roles.

  • Campaign association: We coined the term “SuperStar” for this campaign of fake-developer GitHub accounts, since many of these accounts, often using specific images, are associated with activities of the North Korean APT group Lazarus.

  • It has been observed that these suspicious GitHub accounts are followed by or follow many other accounts that form a network likely associated with the activities of the North Korean APT group, Lazarus. If not directly connected, these “SuperStar” accounts may appear among the followers or followed accounts within two or three degrees of separation from the suspicious account.

  • Additionally, in the following repository, a screenshot of a group member possibly reveals the use of “SuperStar” in the computer’s name, further underscoring the relevance of this term: https://github.com/orgs/Finalgoal231/discussions/69
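As referenced above, one way to surface candidate “node” accounts is to rank which accounts are followed by the largest number of already-suspected profiles. The edge list below is a made-up placeholder; in practice it would be harvested with the traversal sketched earlier.

```python
from collections import Counter

# (suspected_account, account_they_follow) pairs, hypothetical sample
edges = [
    ("dev-alpha", "superstar-hub"), ("dev-beta", "superstar-hub"),
    ("dev-gamma", "superstar-hub"), ("dev-alpha", "golden-star-dev"),
    ("dev-beta", "golden-star-dev"), ("dev-gamma", "random-oss-maintainer"),
]

# Accounts followed by many suspects are candidate hubs worth a closer look
in_degree = Counter(target for _, target in edges)
for account, count in in_degree.most_common(5):
    print(f"{account}: followed by {count} suspected accounts")
```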

[Image: “SuperStar”-style profile avatars]

# #2 Type of accounts: AI-generated images or avatars

Many profiles use AI-generated images or avatars, which may be intentionally chosen to obscure the user’s identity or add visual appeal.

  • Usernames and Descriptions: The accounts often have generic usernames or employ developer-centric keywords like “Full Stack Developer,” “Senior Dev,” or domain-specific tags like “Blockchain.” Their bios typically feature broad and appealing job descriptions that seem crafted to attract followers or create an impression of expertise.
  • Activity: These profiles are more likely to interact with repositories, leave comments, contribute code, and follow trending repositories or popular tech stacks like Blockchain, AI, or Full-Stack Development, reflecting an effort to engage with and be visible within key communities.

[Image: profiles using AI-generated avatars]

# #3 Type of accounts: Minion Avatars

Multiple instances of these profiles have been identified. Some of these GitHub accounts are linked to LinkedIn profiles (verified connections) but exhibit no typical social activity on that platform. This lack of interaction raises suspicion, especially given their claimed experience levels; accounts with such credentials would generally show higher activity. Monitoring these accounts through their associations can reveal suspicious behavior, as many of these profiles lack repositories or noteworthy content that would normally prompt someone to “follow” them on GitHub.

  • Use of Minion Avatars: All the accounts display different variations of Minion characters. In the context of suspicious activity, this gives the impression of a coordinated network.

  • Generic Usernames and Roles: Usernames include elements like “Dev,” “Engineer,” or generic names paired with Minion references. Some descriptions reference broad technical roles like “Full Stack Developer” or “Senior Engineer,” with vague but appealing claims like “10+ years of experience,” targeting developer communities without revealing much specific information.

  • These accounts could be part of a coordinated network using Minion avatars. This tactic might serve to obscure their true purpose, promote a sense of community, and establish a base of followers. When combined with high follower counts, vague but enticing job descriptions, and developer-centric keywords, these accounts could be involved in deceptive activities like amplifying repositories, gathering intelligence, or even executing coordinated influence operations within the GitHub community.

[Image: profiles using Minion avatars]

# #4 Type of accounts: Cartoon-style Avatars

These GitHub profiles exhibit some characteristics that might indicate suspicious or inauthentic activity:

  • Unusual Consistency in Avatars: Many of these accounts have similar bunny-themed avatars, which could indicate they were generated or chosen in bulk for visual uniformity, possibly as part of a bot network.

  • Inconsistent Information: Some of these accounts list generic job titles (e.g., “Full Stack Developer”) and brief descriptions without specifics. Real developer profiles on GitHub usually have personalized descriptions or links to real projects and repositories.

The use of cartoon bunny avatars, especially when combined with low activity and generic profile information, could suggest that these accounts are part of a coordinated network, likely created for non-legitimate purposes such as boosting followers or creating the appearance of activity around specific users or projects.

[Image: profiles using cartoon bunny avatars]

# #5 Type of accounts: Anime Avatars

These GitHub profiles also show signs of potentially suspicious or coordinated behavior, with similarities to the previous sets of images described:

  • An interesting aspect of these profiles with “anime avatars” is their tendency to engage actively on GitHub, not only in communities but also in social interactions. It appears that profiles using these types of images demonstrate a certain level of autonomy and expertise, as their contributions to other repositories often reveal advanced knowledge in areas like Blockchain.

  • It’s important to note that using anime avatars as profile pictures is common across the internet. However, when multiple profiles share similarities in skills, images, profile creation dates, followed accounts, and other characteristics, they deviate from typical patterns and behaviors. These overlapping factors raise questions about the authenticity and intentions behind these accounts.

  • Anime-style Avatars: A high concentration of anime-themed profile pictures suggests a possible pattern. While anime avatars are common among some users, the similar styles across these profiles can indicate they were chosen from a shared source, which is often seen in bot networks.

  • Generic Profile Descriptions: Many accounts list vague titles like “Full Stack Developer” or “Blockchain Developer” without showcasing projects, repositories, or specific achievements. Authentic GitHub profiles usually highlight contributions or link to actual code repositories.

  • Follower-to-Following Ratios: Some profiles have unusually high follower counts despite minimal activity. This could indicate artificial boosting or reciprocal following within a network to create the appearance of credibility (a rough version of this check is sketched below).
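The ratio check mentioned above can be approximated directly from the public user endpoint. The candidate logins and the numeric thresholds below are arbitrary illustrative cut-offs, not validated detection rules.

```python
import requests

CANDIDATES = ["candidate-login-1", "candidate-login-2"]  # hypothetical logins

for login in CANDIDATES:
    r = requests.get(f"https://api.github.com/users/{login}", timeout=30)
    if r.status_code != 200:
        print(f"{login}: lookup failed (HTTP {r.status_code})")
        continue
    u = r.json()
    ratio = u["followers"] / max(u["following"], 1)
    if ratio > 20 and u["public_repos"] < 3:  # high reach, little visible work
        print(f"{login}: {u['followers']} followers, {u['public_repos']} repos "
              "-> possible artificial boosting")
    else:
        print(f"{login}: ratio {ratio:.1f}, repos {u['public_repos']}")
```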

[Image: profiles using anime avatars]

# #6 Type of accounts: Real Identities / Fake Human Profiles

The GitHub accounts shown here also exhibit some indicators that may suggest they are not authentic. Here’s an analysis of suspicious characteristics:

  • Professional Headshot-style Avatars: Unlike the typical developer profile images, these accounts use professional headshots or casual photos that appear unrelated to the GitHub platform. While this alone isn’t a red flag, a pattern of using similar real-looking images, especially if taken from unrelated contexts or stock photos, could point to the use of fake or “borrowed” identities.

  • Low Activity and High Follower-to-Following Ratios: Some accounts have follower and following numbers that don’t align with actual contributions or repositories, suggesting they may be part of a network or were created to follow or boost other accounts artificially.

  • Location and Background Inconsistencies: The profiles show a mix of diverse locations and job titles without verifiable links to actual projects or companies. This spread of locations and titles could indicate an attempt to appear internationally diverse or legitimate, while actual activity may be minimal.

  • External Links and Contact Information: Some profiles include contact information or links to freelancing platforms, which could be legitimate, but in some cases, it’s used to create a perception of authenticity. If these links lead to minimal or duplicate portfolios, it could be another indication of a fake network.

  • Location-Based Trust Signals: Recently it has been observed that many of these accounts are now adapting by specifying locations, sometimes even providing exact details about small and mid-sized cities where they are supposedly based. This seems to be an attempt to create a greater sense of trust and authenticity.

[Image: profiles using real or stolen identity photos]

# #7 Suspended accounts related to this network

These accounts are currently suspended and appear among the followers or following lists of accounts linked to activity associated with the North Korean APT Lazarus Group.

  • An important aspect to highlight is that many of these suspended accounts fit into the segmentation we discussed regarding the identifiers they use for themselves.

  • The widespread suspension could indicate suspicious or coordinated behavior, possibly part of Lazarus’s tactics, which include creating multiple accounts to carry out social engineering attacks, spread malicious code, or manipulate repository metrics. This type of activity aligns with known strategies by North Korean threat actors who use compromised or fake developer accounts on platforms like GitHub to conduct cyber operations. A simple way to track this attrition over time is sketched below.
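One way to monitor this is to periodically re-resolve a previously collected list of logins: an account that now returns HTTP 404 has been deleted, renamed, or suspended (the public API alone does not say which). The snapshot list below is a placeholder; in practice it would come from an earlier collection run.

```python
import requests

previously_seen = ["old-suspect-1", "old-suspect-2"]  # hypothetical earlier snapshot

for login in previously_seen:
    status = requests.get(f"https://api.github.com/users/{login}", timeout=30).status_code
    if status == 404:
        print(f"{login}: no longer resolves (deleted, renamed, or suspended)")
    else:
        print(f"{login}: still present (HTTP {status})")
```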

[Image: suspended accounts within the network]

Highlights

Combining innovative threat-hunting techniques, such as image analysis and human intelligence (HUMINT), can enhance the ability to track threat actors more effectively.

  • Image analysis, like tracking specific visual elements such as icons or images, allows researchers to identify and link threat actors across campaigns, even when they attempt to evade detection. Similarly, HUMINT focuses on gathering intelligence from social engineering, insider interactions, and open-source human behavior, adding context that purely technical indicators often miss.

  • Together, these methods show the potential of combining technical and behavioral insights to detect and anticipate threats earlier. However, they are often underutilized, as traditional threat hunting tends to focus mainly on network and endpoint indicators. Emphasizing HUMINT and image analysis could improve early threat detection and adversary profiling.

  • Analyzing alternative methods, such as images, photos, videos, and conversations in the early stages of the kill-chain, is an underutilized approach in threat hunting. By considering the attacker’s context, we can gain valuable insights into their tactics across various social networks. Individuals from countries with strict internet control and isolation often exhibit distinctive cultural and online behavior patterns, making these aspects both valuable and intriguing to study.

Cyber intelligence requires a holistic approach that includes behavioral analysis, threat actor profiling, and contextual intelligence; retrospective analysis reduces us to evidence collectors rather than proactive defenders.

True intelligence aims to anticipate the adversary’s actions, maintaining a proactive edge.

# How gh-fake-analyzer Provides Context and Identifies Potential Account Behavior

gh-fake-analyzer is designed to quickly build a dataset, with an accessible schema, of the GitHub profiles you are targeting in an investigation. It is still a work in progress: https://github.com/shortdoom/gh-fake-analyzer

It downloads, analyzes, and monitors profile data for any GitHub user or organization. This reconnaissance tool is designed for the OSINT/security community, enabling the inspection of potential bot, scammer, blackhat, or fake-employee accounts for dark patterns (see “Malicious GitHub Accounts”).

# Analyzing Lazarus-Related accounts with gh-fake-analyzer:

Leveraging the gh-fake-analyzer tool can significantly enhance your intelligence operations by providing advanced capabilities to detect and analyze fraudulent or misleading GitHub profiles. This tool utilizes various heuristics to identify indicators of deception, such as unusual activity patterns, inconsistencies in profile data, and irregular contributions.
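As one illustration of the kind of heuristic such tooling relies on (a sketch under our own assumptions, not necessarily gh-fake-analyzer’s exact implementation): commits whose author date predates the account’s creation date often indicate copied or rewritten history. The endpoints below are the standard public GitHub REST API; the login is a placeholder.

```python
from datetime import datetime

import requests

API = "https://api.github.com"
LOGIN = "account-under-review"  # hypothetical account


def iso(ts: str) -> datetime:
    """Parse a GitHub ISO-8601 timestamp (e.g. 2023-06-12T10:00:00Z)."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


created = iso(requests.get(f"{API}/users/{LOGIN}", timeout=30).json()["created_at"])

repos = requests.get(f"{API}/users/{LOGIN}/repos",
                     params={"per_page": 100}, timeout=30).json()
for repo in repos:
    commits = requests.get(f"{API}/repos/{LOGIN}/{repo['name']}/commits",
                           params={"author": LOGIN, "per_page": 100}, timeout=30)
    if commits.status_code != 200:  # e.g. empty repositories return 409
        continue
    for c in commits.json():
        authored = iso(c["commit"]["author"]["date"])
        if authored < created:
            print(f"{repo['name']}: commit {c['sha'][:7]} authored {authored.date()} "
                  f"predates account creation on {created.date()}")
```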

By integrating gh-fake-analyzer into your threat-hunting workflow, you can effectively filter out potential threats posed by malicious actors who may disguise themselves as legitimate contributors. This enables you to:

  • Enhance Profile Verification: Quickly assess the authenticity of GitHub users, ensuring that you engage only with trustworthy contributors.

  • Identify Potential Threats: Detect accounts that exhibit suspicious behavior, such as fake contributions or misleading project involvement, which may indicate a broader threat.

  • Improve Incident Response: By identifying fraudulent profiles early, you can prevent potential security breaches and reduce the impact of malicious activities.

Incorporating gh-fake-analyzer into your intelligence toolkit can help you make informed decisions and maintain a proactive stance against evolving threats in the digital landscape.