Mechanistic Interpretability in Action: Understanding Induction Heads and QK Circuits in Transformers

Ayyüce Kızrak, Ph.D.
20 min read · Sep 28, 2024


This project, created for the AI Alignment Course — AI Safety Fundamentals powered by BlueDot Impact, leverages a range of advanced resources to explore key concepts in mechanistic interpretability in transformers.

Acknowledgment — I would like to express my gratitude to the AI Safety Fundamentals team, the facilitators, and all participants in the cohorts I contributed to for the opportunity to develop new ideas in our discussions. I am pleased to be a part of this team.

Cover Image Source: Google DeepMind on Unsplash

To explore the practical implementation of the topic discussed in this blog post, check out my GitHub repository.👇

Introduction — Mechanistic Interpretability

In artificial intelligence (AI), mechanistic interpretability focuses on studying and understanding how artificial neural network (ANN) models, including deep learning models, work at the level of individual components such as neurons, circuits, and weights. AI models are often described as “black boxes” because their inner workings are opaque. Mechanistic interpretability approaches within AI alignment, however, examine those inner workings to give a clearer picture of how ANNs reach their decisions.

Image source: “Zoom In: An Introduction to Circuits”

What makes this approach interesting is that it was inspired by breakthroughs in fields such as cellular biology, where visualizing the microscopic internal structure of organisms led to significant advances. Similarly, deep learning researchers now focus on understanding NNs by examining their internal mechanisms. The main subheadings of mechanistic interpretability are:

  1. Neuron-Level Analysis: This involves examining individual neurons in an NN to understand their specific roles and behaviors. Each neuron can detect specific features or patterns in the data. For example, it aims to identify neurons that detect particular concepts, such as edges in images, specific words in text, or complex abstractions, such as emotion or object presence.
  2. Feature Detection: This means understanding how individual neurons or groups of neurons represent specific features, such as curves or edges, or more abstract concepts, such as object parts in image models or syntactic elements in language models. For example, in vision models, lower layers can detect simple features, such as edges, while upper layers can detect complex features, such as faces or objects.
  3. Circuit Analysis: This involves mapping how groups of neurons interact through connections, or weights, to perform higher-level functions. Circuits can be viewed as subgraphs that capture specific computational patterns within the model. For example, we might try to understand how neurons work together to detect a dog’s head in an image or how they recognize repeated patterns in text sequences.
  4. QK Circuit Analysis: This focuses on understanding how the Query and Key (QK) matrices in attention mechanisms allocate focus among different tokens. This analysis helps reveal how models prioritize information in sequences. For example, analyzing how QK circuits in transformer models handle long-range dependencies in text reveals which words in a sentence the model emphasizes based on the preceding context.
  5. Induction Head Detection: Induction heads are specific attention heads in transformers that help models remember and repeat sequences during in-context learning. Analyzing these heads can provide insights into how models handle patterns in data. For example, it involves detecting and analyzing heads that learn to predict the next word in a repeated sequence, such as a list of names or a sequence of events.
  6. Layer-Based Analysis: This focuses on understanding how information transforms and flows through the different layers of a model. Each layer can perform a different type of computation or abstraction that reveals the hierarchical structure of the model’s learning. For example, lower layers in language models may capture basic syntactic structures, while higher layers may capture semantic meaning or contextual nuances.
  7. OV Circuit Analysis: This involves examining how the Output (O) and Value (V) matrices in the attention layers contribute to the model’s predictions. It focuses on explaining how information moves and transforms within the model, such as observing how value vectors are weighted and summed to create the final representation of a token in a sequence.
  8. Polysemantic Neurons: These neurons respond to multiple unrelated features or concepts. They can be challenging to understand because they often represent complex or intertwined concepts. For example, a neuron in a language model might fire for both “cat” and “car” because it responds to something they have in common, such as the initial letter “c.”
  9. Activation and Weight Analysis: This refers to understanding the function of specific components or layers in a network by analyzing the activation of neurons and the weights that connect them. For example, it involves analyzing how changing a weight affects the output of a neuron or how activation patterns change in response to different inputs.
  10. Causal Interventions: This approach tests how changes to model components, such as ablations (removing parts of the model) or changes to the inputs, affect the model’s behavior. For example, consider removing an attention head to see how it affects the model’s performance on a specific task, such as sentiment analysis.

Collectively, these subtopics aim to provide a deeper, mechanistic understanding of how complex models work. This will enable researchers to demystify NNs’ behavior and improve their transparency, reliability, and safety, paving the way for more advanced and dependable AI systems.

In this blog post, I focus on two of these ten highlighted subtopics and address the following fundamental question: can mechanistic interpretability techniques such as Induction Head Detection and QK Circuit Analysis help us better understand and improve the behavior of transformer-based models such as BERT and GPT-2, and if so, how? The potential impact of this research is significant, inspiring us to delve deeper into the inner workings of these complex models and motivating us to contribute to the advancement of AI.

This project investigates how these techniques reveal the inner workings of transformer models, which are often perceived as ‘black boxes.’ Understanding these internal mechanisms is not just crucial; it is also fascinating. It enhances AI transparency, especially in advanced AI systems that may hold pivotal decision-making roles. Enhancing transparency can mitigate the risks associated with misaligned and undesirable model behavior. Insights gained from understanding induction heads and QK circuits can contribute to building safer, more predictable AI systems by clarifying how models maintain context and allocate attention across different inputs. Let’s first try to understand these two concepts in terms of mechanistic interpretability and appreciate the urgency and significance of this understanding.

Induction Head Detection

Induction heads are not just another component in transformer models but a crucial element that enables the model to remember and replay sequences during in-context learning. Their role is pivotal: in a sequence like ‘the cat sat, the cat ran,’ an induction head helps the model look back at what followed the first occurrence of ‘cat’ when it encounters the word again, so it can predict or generate the continuation. Induction Head Detection, a subtopic of Feature Detection and Circuit Analysis, is dedicated to understanding how these individual attention heads contribute to the broader function of remembering sequences.

Image Resource — In-context learning and Induction Heads

In this project, I introduce an approach to detecting induction heads: dynamic thresholding. By dynamically adjusting the threshold based on the variance of the attention scores, I aim to identify these heads more accurately and gain deeper insights into how transformers remember context. This approach is designed to detect induction heads automatically and to investigate and evaluate their contribution to the model’s decision-making process.

# Sample Code for Induction Head Detection
import torch
from transformer_lens import HookedTransformer

# Load GPT-2 small with TransformerLens
device = "cuda" if torch.cuda.is_available() else "cpu"
model = HookedTransformer.from_pretrained("gpt2", device=device)

sequences = [
    "the cat sat on the mat",
    "the cat sat the cat sat",
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps the quick brown fox jumps"
]

# Tokenize sequences and cache attention activations for a repeated-pattern sequence
tokenized_sequences = [model.to_tokens(seq).to(device) for seq in sequences]
_, activations = model.run_with_cache(tokenized_sequences[1])

# Apply dynamic threshold detection (project helper, sketched below)
predicted_scores, thresholds = dynamic_threshold_detection(activations)
print(f"Predicted Induction Head Scores: {predicted_scores}")

QK Circuit Analysis

QK circuits (Query-Key Circuits) represent the fundamental mechanism by which transformers allocate attention among tokens in an input sequence. The Query and Key matrices help determine which parts of the input sequence the model should focus on, prioritizing relevant tokens and ignoring irrelevant ones. This technique is covered in Circuit Analysis and Layer-Based Analysis because it involves tracing the connections between the Query and Key matrices and understanding how attention patterns evolve across different layers of the transformer model.

Image Resource-Reversing Transformer to understand In-context Learning with Phase change & Feature dimensionality

This project uses causal interventions to analyze QK circuits. In this method, specific components of the circuit are changed or eliminated to observe how the model’s performance and attention allocation change. Understanding how these circuits affect attention patterns will reveal more profound insights into how transformers make decisions on complex tasks where attention allocation is critical.

# Sample Code for QK Circuit Analysis
def extract_qk_patterns(model, tokens, layer):
    # Run the model once and cache all intermediate activations
    _, cache = model.run_with_cache(tokens)
    Q = cache[f'blocks.{layer}.attn.hook_q']  # [batch, seq_pos, n_heads, d_head]
    K = cache[f'blocks.{layer}.attn.hook_k']  # [batch, seq_pos, n_heads, d_head]
    return Q, K

# Extract QK patterns from the final layer of GPT-2 small (layer 11) for a sample sequence
Q, K = extract_qk_patterns(model, tokenized_sequences[0], 11)
# plot_qk_interactions is the project's heatmap helper (sketched below)
plot_qk_interactions(Q, K, model.to_str_tokens(tokenized_sequences[0]), title="QK Interaction Heatmap for Sequence 1")
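
Likewise, plot_qk_interactions is a project-specific plotting helper. Below is a minimal sketch of how such a heatmap could be drawn with Seaborn and Matplotlib; averaging the query and key vectors over heads and the styling are assumptions made for illustration, not necessarily what the repository does:

# Illustrative sketch of a QK heatmap helper (not the exact project code)
import matplotlib.pyplot as plt
import seaborn as sns

def plot_qk_interactions(Q, K, str_tokens, title="QK Interaction Heatmap"):
    # Q and K come from hook_q / hook_k with shape [batch, seq_pos, n_heads, d_head]
    q = Q[0].mean(dim=1)                 # average over heads: [seq_pos, d_head]
    k = K[0].mean(dim=1)                 # average over heads: [seq_pos, d_head]
    scores = q @ k.T                     # dot product for every query/key token pair
    plt.figure(figsize=(8, 6))
    sns.heatmap(scores.detach().cpu().numpy(),
                xticklabels=str_tokens, yticklabels=str_tokens,
                cmap="viridis", annot=True, fmt=".2f")
    plt.xlabel("Key Tokens")
    plt.ylabel("Query Tokens")
    plt.title(title)
    plt.show()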

Existing Answers and Prior Research

Previous research on mechanistic interpretability has primarily focused on understanding attention heads in transformers. Let’s try to understand what this means!

Studies on Induction Heads show that they help models capture repeated patterns in sequences and maintain context, especially in text-generation tasks.

Studies on QK Circuits examine how transformers allocate attention by analyzing interactions between Query and Key matrices. They also provide insights into how models prioritize tokens in decision-making tasks.

Significant work from OpenAI and Anthropic has contributed to understanding these internal mechanisms, and the EleutherAI community has also made notable strides in analyzing LLMs.

The Bigger Picture

As AI finds broad and practical applications in healthcare, law, education, and finance, understanding the mechanisms behind model decisions becomes increasingly critical. Mechanistic interpretability offers a way to remove ambiguity from AI systems and to ensure that models are robust, transparent, reliable, and aligned with human values. In this project, I offer a perspective on this open-ended area of AI interpretability by focusing on Induction Head Detection and QK Circuit Analysis, showing just one path toward more transparent and reliable AI systems. I would also like to emphasize that this research can go much deeper.

Methods

The project aims to deepen our understanding of how transformer-based models like BERT and GPT-2 handle attentional mechanisms through two specific techniques. These approaches help provide insights into the model’s ability to allocate attention across tokens during sequence recall and text generation tasks.

Induction Head Detection

The goal is to automatically detect induction heads, specific attention heads responsible for capturing repeated subsequences during in-context learning. Induction heads are integral to the model’s ability to predict repeated patterns within text sequences.

Steps Taken:

  • Data Preparation: We created text sequences with clear, repeated patterns to highlight induction head activity. Examples include phrases such as “the cat sat the cat sat,” which give induction heads an opportunity to pick up on repetitive patterns.

According to Callum McDougall’s study, induction heads play an important role when models need to detect repeated patterns in text sequences. For example, when a model encounters a token like “James” and has previously seen the pair “James Bond” in the text, the induction head helps predict that “Bond” will follow. This behavior goes beyond simple memorization and becomes important for long-range pattern detection.

  • Dynamic Thresholding: We addressed the detection of induction heads by applying dynamic thresholding, which is adjusted according to the variance in attention scores across different layers. This method allowed us to better detect induction heads by tracking attention patterns, especially those responsible for in-context learning.

For example, in a sequence like “The quick brown fox…” followed later by “The quick dog jumps,” the induction head notices that the word “quick” was previously followed by “brown fox.” When it encounters “quick” again in the second part of the sequence, the induction head looks back at the earlier occurrence and predicts that something similar, like “brown fox,” might follow instead of “dog.” This demonstrates how the induction head uses the previous context to predict repeated patterns.

We applied dynamic thresholding based on the variance in attention scores across layers to detect such patterns in a transformer’s attention map. This technique helps highlight how specific heads focus on tokens that follow previous sequence occurrences, thereby allowing the model to maintain context and generate accurate predictions.

  • Attention Heatmaps: Attention heatmaps were used to visualize how these induction heads work. The heatmaps highlighted the induction head “ribbon,” where the attention pattern is sharply focused on the previously repeated subsequence. We observed how the model adjusted its focus when encountering repeated tokens by comparing heatmaps from different layers.

These visualizations confirmed how the induction heads track long-range dependencies between tokens. As in our example, the induction head detects the repeating pattern in the phrase “The quick” and shifts its attention to the “brown fox” that follows “quick” to predict that something like “brown fox” will follow the second token, “quick.” Visualizing these interactions across layers shows how the transformer remembers repeating subsequences and uses this information to make more accurate predictions. We can better understand how the model tracks context and learns long-term dependencies by detecting these attention patterns with dynamic thresholding.

  • Context Retention Score: The context retention score measures how well the model retains context by evaluating how accurately it predicts repeating patterns in the input sequence. The score indicates how much attention is paid to the correct token when a previously seen subsequence is encountered; a minimal sketch of one way to compute such a score follows below.
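
The exact metric is defined in the repository; the sketch below shows one plausible way to operationalize it, again assuming sequences whose second half repeats the first half and reusing the diagonal attention measure from the dynamic-thresholding sketch (the default layer count is an assumption):

# Illustrative sketch of a context retention score (not the exact project metric)
def context_retention_score(cache, n_layers=12):
    per_layer = []
    for layer in range(n_layers):
        pattern = cache[f'blocks.{layer}.attn.hook_pattern']   # [batch, head, query, key]
        offset = pattern.shape[-1] // 2 - 1
        # Average attention mass placed on the "correct" token, i.e. the token that
        # followed the previous occurrence of the current token in the first half
        per_layer.append(pattern.diagonal(-offset, dim1=-2, dim2=-1).mean().item())
    return max(per_layer)   # score of the most induction-like layer

# Example (hypothetical): score the repeated sequence "the cat sat the cat sat"
_, cache = model.run_with_cache(tokenized_sequences[1])
print(f"Context retention score: {context_retention_score(cache):.3f}")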

QK Circuit Analysis

This section aims to analyze the QK circuits that determine how transformers allocate attention among tokens. This is essentially reverse engineering. The approach has provided insights into which tokens the model prioritizes and how it makes decisions when processing sequences.

Steps Taken:

  • QK Pattern Extraction: The first and most important step in analyzing QK circuits involves extracting the Query and Key matrices from specific transformer layers. By analyzing the dot product between these matrices, we observed how tokens in the input sequence attract attention. This becomes critical for understanding how attention patterns evolve as sequences get longer or more complex.

This process relates to K-composition, in which attention heads in different layers compose: an earlier head’s output feeds into a later head’s key computation, guiding how the model allocates attention. As in the example, the model’s attention head predicts that the second “quick” will be followed by something like “brown fox” by identifying the repetitive pattern that followed the first “quick.” By reverse engineering these circuits, we analyzed how attention moves between tokens like “quick” and “brown fox” and how the model uses this information to predict future patterns.

The diagram below, from a blog post by Callum McDougall, illustrates this process.

Callum McDougall devised an illustrative method that simplifies how transformers use K-composition within induction heads to identify repetitive sequences in text. McDougall elaborates on how the attention mechanism specifically targets tokens that conform to a previously observed pattern, for example, “[LA] at position N” followed by “[LB] at position N+1.” This enables the model to accurately anticipate and replicate sequences within the text.

  • Causal Interventions: To deepen our understanding of how QK circuits affect model behavior, I implemented causal interventions, mainly through ablation (a minimal sketch appears after this list). By selectively removing or altering parts of the QK circuit, we could measure the impact on attention allocation and model performance. These ablations helped isolate the effects of specific tokens on the overall attention pattern.

The causal interventions aimed to disrupt these connections and observe how the model compensated for or shifted its attentional focus. This provided an important insight into how deeply QK circuits affect model decision-making.

  • Layer-Wise Analysis: A significant portion of the analysis focused on tracking how attention patterns evolved across different layers of the transformer model. A layer-wise analysis was performed to observe how attention shifted and merged across layers. This provided a hierarchical view of how attention to specific tokens shifted as the model processed longer sequences or more complex tasks.
  • Visualizing Attention Patterns: Visualizing attention patterns across layers was important for understanding how QK circuits dynamically allocate attention. These visualizations showed how attention shifted from one token to another and how multiple tokens competed for attention within the transformer architecture.
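
To make the causal-intervention step concrete, the sketch below zero-ablates a single attention head with TransformerLens hooks and compares the model’s output with and without it. Zeroing the head’s per-head output (hook_z) removes the whole head, which is a coarser intervention than editing only its QK circuit, and the layer and head indices are purely illustrative:

# Illustrative sketch of a head ablation via TransformerLens hooks (indices are hypothetical)
from functools import partial

def ablate_head(value, hook, head_idx):
    # value has shape [batch, seq_pos, n_heads, d_head]; zero out one head's output
    value[:, :, head_idx, :] = 0.0
    return value

def run_with_head_ablated(model, tokens, layer, head_idx):
    hook_name = f"blocks.{layer}.attn.hook_z"   # per-head outputs before the W_O projection
    return model.run_with_hooks(
        tokens,
        fwd_hooks=[(hook_name, partial(ablate_head, head_idx=head_idx))],
    )

# Compare next-token logits with and without a candidate induction head (e.g. layer 5, head 1)
clean_logits = model(tokenized_sequences[1])
ablated_logits = run_with_head_ablated(model, tokenized_sequences[1], layer=5, head_idx=1)
print("Shift in final-position logits:",
      (clean_logits[0, -1] - ablated_logits[0, -1]).norm().item())

A finer-grained variant could hook blocks.{layer}.attn.hook_pattern instead, so that only the head’s attention allocation is disturbed.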

Comparative Analysis

This project aimed to compare the insights gained from two approaches to mechanistic interpretability, Induction Head Detection and QK Circuit Analysis, to provide a comprehensive picture of transformer behavior.

By comparing Induction Head Detection, which focuses on sequence memory and repetition, with QK Circuit Analysis, which examines how attention is allocated, we better understood how these two mechanisms interact within a transformer model. Both play essential roles but from different perspectives: induction heads help remember sequences, while QK circuits prioritize which tokens to focus on during inference. So it would not be wrong to say that the right technique depends on the question at hand.

Tools Used

  • TransformerLens Library: This library made loading and running HookedTransformer models like GPT-2 easy. It allowed us to connect to internal model layers to extract activations for analysis.
  • Attention Heatmaps and Visualizations: Using visualization tools such as Seaborn and Matplotlib, we created heatmaps to show the activation of induction heads and the evolution of attention patterns driven by QK circuits across layers.

Results

The project investigated key techniques, such as induction head detection and QK circuit analysis, to uncover the inner workings of transformer models like BERT and GPT-2. The results confirmed the importance of these mechanisms in improving context retention and attention allocation. The detailed findings from both methods are as follows:

Induction Head Detection

  • Dynamic Thresholding Success: This method effectively identified induction heads within the transformer model. We found it crucial for in-context learning, especially in sequences with repetitive patterns. For example, in the sequence “the cat sat the cat sat,” induction heads focused on previously seen tokens, allowing the model to predict future tokens better.
  • Attention Heatmaps: Visualizations via attention heatmaps helped show how induction heads focus attention on tokens that follow a previously repeated token. This also revealed that the transformer relies on induction heads to maintain long-range dependencies and context. These results highlight that induction heads are particularly effective in tasks where continuity is critical, such as text generation and dialogue systems.
  • Improved Context Retention: Induction heads strongly correlated with enhanced context retention in the model’s outputs. Tasks that required the model to remember sequences, such as summarization and repetitive text generation, showed noticeable performance improvements when these heads were enabled. The introduction of the context retention score helped provide a quantitative measure of how well the model maintained context across long sequences.

Explanation of the Results

Sequence 1: “the cat sat on the mat.”

  • Description: This sequence does not contain any repetition, making it a baseline for comparing the behavior of attention heads in non-repetitive contexts.
  • Activation Pattern: The diagonal pattern indicates that the attention heads focus on adjacent tokens, a typical behavior for simple sequence processing. There is no strong indication of induction behavior as expected.

Sequence 2: “the cat sat the cat sat.”

  • Description: This sequence repeats the phrase “the cat sat,” making it a good candidate for detecting induction heads.
  • Activation Pattern: The strong activation along the diagonals at the repeating tokens (i.e., the second “the cat sat”) suggests that the attention head is focusing on the earlier instance of “the cat sat” to predict the following tokens. This pattern clearly indicates induction behavior, where the model leverages the repeated structure to improve its predictions.

Sequence 3: “the quick brown fox jumps over the lazy dog.”

  • Description: This is a unique sentence without repetition, providing a distinct structure for attention patterns.
  • Activation Pattern: Similar to Sequence 1, a diagonal pattern shows the model’s focus on the immediate following tokens. This indicates standard sequential processing without induction. The absence of off-diagonal solid activation shows that no repeated sequences are being leveraged.

Sequence 4: “the quick brown fox jumps the quick brown fox jumps.”

  • Description: This sequence includes a clear repetition of the phrase “the quick brown fox jumps,” which should trigger induction behavior.
  • Activation Pattern: The model shows clear off-diagonal attention, focusing on the first occurrence of the phrase “the quick brown fox jumps” when processing the second occurrence. This attention pattern demonstrates the model’s ability to use the previous context to make predictions, a hallmark of induction heads at work.

QK Circuit Analysis

  • Token Prioritization in QK Circuits: This method revealed how transformers distributed attention among different tokens, with specific tokens receiving more focus based on their relevance to the task. By extracting and analyzing the Query and Key matrices, we observed which tokens were prioritized and how attention shifted across tokens over time. The QK interaction heatmaps provide a detailed representation of these attention patterns.
  • Causal Interventions: The application of causal interventions such as ablations showed how QK circuits are critical for maintaining context within transformers. Ablations of these circuits impaired the model’s ability to maintain context and negatively impacted tasks such as sentiment analysis and text classification.
  • Impact of Attention Manipulation: Visualizations after the interventions showed how manipulating attention circuits affected the model’s performance. Disabling specific QK circuits led to changes in attention allocation, showing that these circuits determine which tokens to focus on and how information flows between layers. This insight is particularly valuable for tasks requiring AI transparency, such as bias detection and fairness.

Explanation of Results

The visualizations show the QK interaction heatmaps for the four sequences, each illustrating how tokens interact with one another according to the model’s attention scores.

The heatmap shows the interaction between the Query Tokens (y-axis) and the Key Tokens (x-axis). Each cell in the heatmap shows the dot product of the Q and K vectors for a given token pair, which gives us the weight of attention between those tokens. Higher values are shown in darker colors, indicating stronger interactions; lower values, i.e., lighter shades, indicate weaker interactions.

This interaction, in accordance with the QK logic, helps the transformer decide which tokens to focus on when processing a sequence.
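
As a small, hypothetical illustration of how to read a single cell, the score for one query/key token pair can be computed directly from the Q and K tensors extracted earlier; the token and head indices below are arbitrary, and any head averaging or scaling used in the project’s plots would change the exact numbers:

# Hypothetical example: interaction between query token i and key token j for one head
i, j, head = 2, 1, 0
q_vec = Q[0, i, head]          # query vector of token i: [d_head]
k_vec = K[0, j, head]          # key vector of token j: [d_head]
print("QK interaction:", (q_vec @ k_vec).item())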

Sequence 1 — “the cat sat on the mat”:

  • The token “cat” interacts highly with the token “sat” (13.818), indicating that the model finds a strong connection between them.
  • The end of the sentence shows a significant interaction with “<|endoftext|>” (9.939 and 9.392). This indicates that the model understands the sentence boundary.
  • The word “mat” interacts positively with the preceding tokens, but its highest interaction is with “on” (5.160). This shows that the model understands the context.
Sequence 1 — “the cat sat on the mat”

Sequence 2 — “the cat sat the cat sat”:

  • The repetitive “cat” and “sat” patterns interact strongly throughout the sequence (The value of this interaction is 6.182).
  • The model shows us its ability to recognize the repetitive nature of the sequence and to identify and maintain context in repetitive patterns.
  • The word “sat” (Q) interacts strongly with the word “cat” (K) at several points (5.141 and 6.182). This shows us the role of induction heads in remembering and connecting repetitive subsequences.
Sequence 2 — “the cat sat the cat sat”

Sequence 3 — “the quick brown fox jumps over the lazy dog”:

  • We see that the token “quick” interacts strongly with “brown” (9.416 and 14.387) and “fox” (10.614). This shows us that the model identifies “the quick brown fox” as a coherent expression.
  • The word “lazy” interacts significantly with “dog” (11.594 and 15.300). This shows us that the model correctly associates these two tokens and understands the descriptive nature of the expression.
  • The model maintains context well and identifies key descriptive relationships across a more extended sequence of multiple expressions.
Sequence 3 — “the quick brown fox jumps over the lazy dog”

Sequence 4 — “the quick brown fox jumps the quick brown fox jumps”:

  • There are strong repetitive interactions between “quick” and “brown” (18.406) and “brown” and “fox” (10.170), which shows that the model is paying close attention to the repetitive pattern.
  • The results capture the sequence's repetitive structure well and show that the model’s induction heads recognize the repetitive pattern effectively.
  • In both instances, the word “jumps” interacts with the previous “quick brown fox,” demonstrating the model’s ability to connect repetitive elements across a more extended sequence.
Sequence 4 — “the quick brown fox jumps the quick brown fox jumps”

Overall Impact

  • The model used the induction heads to identify repetitive patterns, as seen in sequences 2 and 4.
  • The strong interactions between repetitive phrases also showed that the model can maintain context across repetitions.
  • Using QK Circuit Analysis, we can see how the model allocates attention across different sequence parts by looking at the heatmaps. This method helps us understand how the transformer model processes context and sentence structure.
  • These insights naturally lead one to conclude that the model can improve performance in tasks such as text generation, summarization, and dialogue systems, where preserving context and understanding repeated structures are crucial.
  • In addition, this analysis is only one way to further improve the interpretability of the model; it can help provide better transparency and reliability for AI applications, especially in complex real-world scenarios.
  • Transformer Transparency and Reliability: Both techniques provide valuable insights into the internal mechanics of transformer models, making them more interpretable.
  • Applications in AI Security and Ethics: Understanding these internal mechanisms is a valuable research area for creating transparent, trustworthy models aligned with human values. The insights gained from this project can be applied to increase transparency in AI systems and make their use safer in critical areas such as healthcare, law, education, and other important decision-making systems.

Discussion

The results from the two fundamental techniques provide a concrete answer to our initial question of how mechanistic interpretability can be used to better understand and improve the behavior of transformer-based models. Using dynamic thresholding, we identified induction heads and revealed how transformers maintain context, particularly during in-context learning. This insight allowed us to link the model’s attention mechanisms to its ability to process repetitive sequences, improving our understanding of the model’s inner workings.

Similarly, analysis of QK circuits showed how transformers allocate attention across different tokens and prioritize information based on its relevance in the sequence. By applying causal interventions such as ablations, we could observe how disrupting these circuits affected the model’s performance on sentiment analysis and text classification tasks. These findings confirmed the hypothesis that transformers rely on specific circuits to maintain context and allocate attention.

Implications for AI Safety

These findings have important implications for the safety of advanced AI systems. We confirmed our belief that “we can make these systems more transparent and predictable by gaining a clearer understanding of the internal mechanisms, such as induction heads and QK circuits, that drive a model’s behavior.” The ability to “zoom in” on specific components within the model can allow us to identify harmful biases or undesirable behaviors early on. These methods enable correcting such issues before deploying AI systems in critical and challenging applications, such as healthcare or legal decision-making.

Overall, the insights gained from this project contribute to building AI models that are not only robust but also more aligned with human values.

Future Work

While the project successfully answered the primary question by identifying induction heads and analyzing QK circuits, there is potential for further research in several crucial areas. Some of these include:

  • Experimenting on Larger Transformer Architectures: Future research could test these methods on more complex models, such as GPT-3 or T5. Whether the results would be similar remains an open question.
  • Fine-Tuning Causal Interventions: Future work could include implementing more precise causal interventions, such as targeted ablations or fine-tuning specific attention heads. This could provide even deeper insights into how individual components of the transformer model affect behavior and output.
  • Fairness and Bias Analysis: One related question from this project is how mechanistic interpretability techniques can be applied to real-world applications, such as ensuring fairness in AI systems. Future research could, therefore, focus on investigating how induction heads and QK circuits contribute to potential biases and how changing them could reduce undesirable behaviors.
  • Exploring Cross-Modal Transformers: Another exciting direction for future work could be to examine cross-modal transformers that address vision and language tasks. Understanding how these models allocate attention across modalities could reveal more insights into model behavior and improve interpretability, or at the very least open up a discussion of how they do so.

The areas I have listed are just a few possibilities for future research that could help improve mechanistic interpretability, thus making AI systems more transparent, trustworthy, and alignable with human values.
