timlawrenz committed on
Commit
e2a5f5d
·
verified ·
1 Parent(s): 4d2e4b4

Upload src/data_processing.py with huggingface_hub

Files changed (1)
  1. src/data_processing.py +1461 -0
src/data_processing.py ADDED
@@ -0,0 +1,1461 @@
1
+ """
2
+ Data processing utilities for Ruby method datasets.
3
+
4
+ This module provides functions to load, preprocess, and prepare Ruby method
5
+ data for GNN training, including custom Dataset classes for AST-to-graph conversion.
6
+ """
7
+
8
+ import json
9
+ import random
10
+ import os
11
+ import logging
12
+ from pathlib import Path
13
+ from typing import List, Dict, Any, Tuple, Optional, Union
14
+ try:
15
+ import torch
16
+ from torch_geometric.data import Data
17
+ TORCH_AVAILABLE = True
18
+ except ImportError:
19
+ TORCH_AVAILABLE = False
20
+
21
+
22
+ def load_methods_json(filepath: str) -> List[Dict[str, Any]]:
23
+ """
24
+ Load Ruby methods from JSON file.
25
+
26
+ Args:
27
+ filepath: Path to the JSON file containing method data
28
+
29
+ Returns:
30
+ List of method dictionaries
31
+ """
32
+ with open(filepath, 'r') as f:
33
+ return json.load(f)
34
+
35
+
36
+ def methods_to_dataframe(methods: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
37
+ """
38
+ Convert list of method dictionaries to a structured format.
39
+
40
+ Args:
41
+ methods: List of method dictionaries
42
+
43
+ Returns:
44
+ List of method dictionaries (pass-through for compatibility)
45
+ """
46
+ return methods
47
+
48
+
49
+ def filter_methods_by_length(methods: List[Dict[str, Any]], min_lines: int = 5, max_lines: int = 100) -> List[Dict[str, Any]]:
50
+ """
51
+ Filter methods by source code length.
52
+
53
+ Args:
54
+ methods: List of method dictionaries
55
+ min_lines: Minimum number of lines
56
+ max_lines: Maximum number of lines
57
+
58
+ Returns:
59
+ Filtered list of methods
60
+ """
61
+ filtered = []
62
+ for method in methods:
63
+ if 'raw_source' in method:
64
+ line_count = len(method['raw_source'].split('\n'))
65
+ if min_lines <= line_count <= max_lines:
66
+ method['line_count'] = line_count
67
+ filtered.append(method)
68
+ return filtered
82
+
83
+
84
+ class ASTNodeEncoder:
85
+ """
86
+ Encoder for mapping AST node types to feature vectors.
87
+
88
+ This class maintains a vocabulary of AST node types found in Ruby code
89
+ and maps them to dense feature vectors for GNN processing.
90
+ """
91
+
92
+ def __init__(self):
93
+ """Initialize the node encoder with common Ruby AST node types."""
94
+ # Common Ruby AST node types based on the parser gem
95
+ self.node_types = [
96
+ 'def', 'defs', 'args', 'arg', 'begin', 'end', 'lvasgn', 'ivasgn', 'gvasgn',
97
+ 'cvasgn', 'send', 'block', 'if', 'unless', 'while', 'until', 'for', 'case',
98
+ 'when', 'rescue', 'ensure', 'retry', 'break', 'next', 'redo', 'return',
99
+ 'yield', 'super', 'zsuper', 'lambda', 'proc', 'and', 'or', 'not', 'true',
100
+ 'false', 'nil', 'self', 'int', 'float', 'str', 'sym', 'regexp', 'array',
101
+ 'hash', 'pair', 'splat', 'kwsplat', 'block_pass', 'const', 'cbase',
102
+ 'lvar', 'ivar', 'gvar', 'cvar', 'casgn', 'masgn', 'mlhs', 'op_asgn',
103
+ 'and_asgn', 'or_asgn', 'back_ref', 'nth_ref', 'class', 'sclass', 'module',
104
+ 'defined?', 'alias', 'undef', 'range', 'irange', 'erange', 'regopt'
105
+ ]
106
+
107
+ # Create mapping from node type to index
108
+ self.type_to_idx = {node_type: idx for idx, node_type in enumerate(self.node_types)}
109
+ self.unknown_idx = len(self.node_types) # Index for unknown node types
110
+ self.vocab_size = len(self.node_types) + 1 # +1 for unknown
111
+
112
+ def encode_node_type(self, node_type: str) -> int:
113
+ """
114
+ Encode a node type to its integer index.
115
+
116
+ Args:
117
+ node_type: The AST node type string
118
+
119
+ Returns:
120
+ Integer index for the node type
121
+ """
122
+ return self.type_to_idx.get(node_type, self.unknown_idx)
123
+
124
+ def create_node_features(self, node_type: str) -> List[float]:
125
+ """
126
+ Create feature vector for a node type.
127
+
128
+ Args:
129
+ node_type: The AST node type string
130
+
131
+ Returns:
132
+ Feature vector as list of floats
133
+ """
134
+ # Simple one-hot encoding for now
135
+ features = [0.0] * self.vocab_size
136
+ idx = self.encode_node_type(node_type)
137
+ features[idx] = 1.0
138
+ return features
139
+
140
+
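+ # Illustrative usage sketch (an assumption of this write-up, not part of the
+ # training pipeline): it shows the one-hot vector produced for a known node
+ # type and the shared fallback index used for unknown types.
+ def _demo_node_encoder():
+ """Encode a known Ruby node type and an unknown one."""
+ encoder = ASTNodeEncoder()
+ vec = encoder.create_node_features('send') # 'send' is in the built-in vocabulary
+ assert vec[encoder.encode_node_type('send')] == 1.0
+ assert sum(vec) == 1.0 # one-hot: exactly one active dimension
+ assert encoder.encode_node_type('no_such_type') == encoder.unknown_idx
+
+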
141
+ class ASTGraphConverter:
142
+ """
143
+ Converter for transforming AST JSON to graph representation.
144
+
145
+ This class parses the AST JSON structure and converts it into
146
+ a graph format suitable for GNN processing.
147
+ """
148
+
149
+ def __init__(self):
150
+ """Initialize the AST to graph converter."""
151
+ self.node_encoder = ASTNodeEncoder()
152
+ self.reset()
153
+
154
+ def reset(self):
155
+ """Reset the converter state for processing a new AST."""
156
+ self.nodes = [] # List of node features
157
+ self.edges = [] # List of edge tuples (parent_idx, child_idx)
158
+ self.edge_attrs = [] # List of edge attributes [child_index, depth, num_siblings]
159
+ self.node_depths = [] # Depth of each node in the tree
160
+ self.node_child_indices = [] # Position of each node among its siblings
161
+ self.node_count = 0
162
+
163
+ def parse_ast_json(self, ast_json: str) -> Dict[str, Any]:
164
+ """
165
+ Parse AST JSON string and convert to graph representation.
166
+
167
+ Args:
168
+ ast_json: JSON string representing the AST
169
+
170
+ Returns:
171
+ Dictionary containing node features, edge indices, and edge attributes.
172
+ edge_attr contains [child_index, depth, num_siblings] per edge.
173
+ node_pos contains [child_index, depth] per node for positional encoding.
174
+ """
175
+ self.reset()
176
+
177
+ try:
178
+ ast_data = json.loads(ast_json)
179
+ self._process_node(ast_data, parent_idx=None, depth=0, child_index=0, num_siblings=1)
180
+
181
+ # Convert to appropriate format
182
+ if not self.nodes:
183
+ # Handle empty AST case
184
+ node_features = [[0.0] * self.node_encoder.vocab_size]
185
+ edge_index = [[], []] # Empty edge list
186
+ edge_attr = []
187
+ node_pos = [[0, 0]]
188
+ else:
189
+ node_features = self.nodes
190
+ if self.edges:
191
+ # Transpose edge list to [2, num_edges] format
192
+ edge_index = [[], []]
193
+ for parent, child in self.edges:
194
+ edge_index[0].append(parent)
195
+ edge_index[1].append(child)
196
+ else:
197
+ edge_index = [[], []]
198
+ edge_attr = self.edge_attrs
199
+ node_pos = list(zip(self.node_child_indices, self.node_depths))
200
+
201
+ return {
202
+ 'x': node_features,
203
+ 'edge_index': edge_index,
204
+ 'edge_attr': edge_attr,
205
+ 'node_pos': node_pos,
206
+ 'num_nodes': len(self.nodes) if self.nodes else 1
207
+ }
208
+
209
+ except Exception:  # Exception already subsumes json.JSONDecodeError
210
+ # Handle malformed JSON or other errors gracefully
211
+ return {
212
+ 'x': [[0.0] * self.node_encoder.vocab_size],
213
+ 'edge_index': [[], []],
214
+ 'edge_attr': [],
215
+ 'node_pos': [[0, 0]],
216
+ 'num_nodes': 1
217
+ }
218
+
219
+ def _process_node(self, node: Union[Dict, List, str, int, float, None],
220
+ parent_idx: Optional[int] = None, depth: int = 0,
221
+ child_index: int = 0, num_siblings: int = 1) -> int:
222
+ """
223
+ Recursively process an AST node and its children.
224
+
225
+ Args:
226
+ node: The AST node (dict, list, or primitive)
227
+ parent_idx: Index of the parent node
228
+ depth: Depth of the current node in the AST
229
+ child_index: Position of this node among its siblings (0-based)
230
+ num_siblings: Total number of siblings (including this node)
231
+
232
+ Returns:
233
+ Index of the current node
234
+ """
235
+ if isinstance(node, dict) and 'type' in node:
236
+ # This is an AST node with a type
237
+ node_type = node['type']
238
+ current_idx = self.node_count
239
+ self.node_count += 1
240
+
241
+ # Create node features
242
+ features = self.node_encoder.create_node_features(node_type)
243
+ self.nodes.append(features)
244
+ self.node_depths.append(depth)
245
+ self.node_child_indices.append(child_index)
246
+
247
+ # Add edge from parent to current node
248
+ if parent_idx is not None:
249
+ self.edges.append((parent_idx, current_idx))
250
+ self.edge_attrs.append([child_index, depth, num_siblings])
251
+
252
+ # Process children with positional information
253
+ if 'children' in node:
254
+ children = node['children']
255
+ n_children = len(children)
256
+ for i, child in enumerate(children):
257
+ self._process_node(child, current_idx, depth=depth + 1,
258
+ child_index=i, num_siblings=n_children)
259
+
260
+ return current_idx
261
+
262
+ elif isinstance(node, list):
263
+ # Process list of nodes
264
+ n_items = len(node)
265
+ for i, child in enumerate(node):
266
+ self._process_node(child, parent_idx, depth=depth,
267
+ child_index=i, num_siblings=n_items)
268
+ return parent_idx if parent_idx is not None else -1
269
+
270
+ else:
271
+ # Leaf node (string, int, float, None)
272
+ if parent_idx is not None:
273
+ current_idx = self.node_count
274
+ self.node_count += 1
275
+
276
+ # Create a generic leaf node
277
+ leaf_type = 'leaf_' + type(node).__name__
278
+ features = self.node_encoder.create_node_features(leaf_type)
279
+ self.nodes.append(features)
280
+ self.node_depths.append(depth)
281
+ self.node_child_indices.append(child_index)
282
+
283
+ # Add edge from parent to leaf
284
+ self.edges.append((parent_idx, current_idx))
285
+ self.edge_attrs.append([child_index, depth, num_siblings])
286
+
287
+ return current_idx
288
+ return -1
289
+
290
+
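+ # Illustrative sketch (not part of the pipeline): convert a hand-written
+ # two-node AST and inspect the resulting graph dictionary.
+ def _demo_ast_graph_converter():
+ """A 'def' node with one 'str' child yields two nodes and one edge."""
+ converter = ASTGraphConverter()
+ graph = converter.parse_ast_json(json.dumps({'type': 'def', 'children': [{'type': 'str'}]}))
+ assert graph['num_nodes'] == 2
+ assert graph['edge_index'] == [[0], [1]] # one parent-to-child edge
+ assert graph['edge_attr'] == [[0, 1, 1]] # [child_index, depth, num_siblings]
+ assert graph['node_pos'] == [(0, 0), (0, 1)] # (child_index, depth) per node
+
+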
291
+ def load_jsonl_file(filepath: str, limit: Optional[int] = None) -> List[Dict[str, Any]]:
292
+ """
293
+ Load data from a JSONL file.
294
+
295
+ Args:
296
+ filepath: Path to the JSONL file
297
+ limit: Optional maximum number of lines to load.
298
+
299
+ Returns:
300
+ List of dictionaries from the JSONL file
301
+ """
302
+ data = []
303
+ with open(filepath, 'r', encoding='utf-8') as f:
304
+ for i, line in enumerate(f):
305
+ if limit is not None and i >= limit:
306
+ break
307
+ line = line.strip()
308
+ if line:
309
+ try:
310
+ data.append(json.loads(line))
311
+ except json.JSONDecodeError:
312
+ continue # Skip malformed lines
313
+ return data
314
+
315
+
316
+ class RubyASTDataset:
317
+ """
318
+ Dataset class for loading Ruby AST data and converting to graph format.
319
+
320
+ This class loads JSONL files containing Ruby method data and converts
321
+ the AST representations to graph objects suitable for GNN training.
322
+ """
323
+
324
+ def __init__(self, jsonl_path: str, transform=None, limit: Optional[int] = None):
325
+ """
326
+ Initialize the dataset.
327
+
328
+ Args:
329
+ jsonl_path: Path to the JSONL file containing method data
330
+ transform: Optional transform to apply to each sample
331
+ limit: Optional maximum number of samples to load.
332
+ """
333
+ self.jsonl_path = jsonl_path
334
+ self.transform = transform
335
+ self.converter = ASTGraphConverter()
336
+
337
+ # Load the data
338
+ self.data = load_jsonl_file(jsonl_path, limit=limit)
339
+
340
+ print(f"Loaded {len(self.data)} samples from {jsonl_path}")
341
+
342
+ def __len__(self) -> int:
343
+ """Return the number of samples in the dataset."""
344
+ return len(self.data)
345
+
346
+ def __getitem__(self, idx: int) -> Dict[str, Any]:
347
+ """
348
+ Get a sample from the dataset.
349
+
350
+ Args:
351
+ idx: Index of the sample
352
+
353
+ Returns:
354
+ Dictionary containing graph data and target
355
+ """
356
+ if idx < 0 or idx >= len(self.data):
357
+ raise IndexError(f"Index {idx} out of range for dataset of size {len(self.data)}")
358
+
359
+ sample = self.data[idx]
360
+
361
+ # Convert AST to graph
362
+ graph_data = self.converter.parse_ast_json(sample['ast_json'])
363
+
364
+ # Create the data object
365
+ result = {
366
+ 'x': graph_data['x'],
367
+ 'edge_index': graph_data['edge_index'],
368
+ 'y': [sample.get('complexity_score', 5.0)], # Default complexity score if missing
369
+ 'num_nodes': graph_data['num_nodes'],
370
+ 'id': sample.get('id', f'sample_{idx}'),
371
+ 'repo_name': sample.get('repo_name', ''),
372
+ 'file_path': sample.get('file_path', '')
373
+ }
374
+
375
+ # Apply transform if provided
376
+ if self.transform:
377
+ result = self.transform(result)
378
+
379
+ return result
380
+
381
+ def get_feature_dim(self) -> int:
382
+ """Return the dimension of node features."""
383
+ return self.converter.node_encoder.vocab_size
384
+
385
+
386
+ def collate_graphs(batch: List[Dict[str, Any]]) -> Dict[str, Any]:
387
+ """
388
+ Collate function for batching graph data.
389
+
390
+ Args:
391
+ batch: List of graph data dictionaries
392
+
393
+ Returns:
394
+ Batched graph data
395
+ """
396
+ if not batch:
397
+ raise ValueError("Cannot collate empty batch")
398
+
399
+ # Collect all node features and edge indices
400
+ all_x = []
401
+ all_edge_index = [[], []] # [source_nodes, target_nodes]
402
+ all_y = []
403
+ batch_idx = []
404
+ node_offset = 0
405
+
406
+ metadata = {
407
+ 'ids': [],
408
+ 'repo_names': [],
409
+ 'file_paths': []
410
+ }
411
+
412
+ for i, sample in enumerate(batch):
413
+ # Node features
414
+ all_x.extend(sample['x'])
415
+
416
+ # Edge indices (offset by current node count)
417
+ edges = sample['edge_index']
418
+ if len(edges[0]) > 0: # Only offset if there are edges
419
+ for j in range(len(edges[0])):
420
+ all_edge_index[0].append(edges[0][j] + node_offset)
421
+ all_edge_index[1].append(edges[1][j] + node_offset)
422
+
423
+ # Target values
424
+ all_y.extend(sample['y'])
425
+
426
+ # Batch indices for each node
427
+ num_nodes = sample['num_nodes']
428
+ batch_idx.extend([i] * num_nodes)
429
+ node_offset += num_nodes
430
+
431
+ # Metadata
432
+ metadata['ids'].append(sample['id'])
433
+ metadata['repo_names'].append(sample['repo_name'])
434
+ metadata['file_paths'].append(sample['file_path'])
435
+
436
+ return {
437
+ 'x': all_x,
438
+ 'edge_index': all_edge_index,
439
+ 'y': all_y,
440
+ 'batch': batch_idx,
441
+ 'num_graphs': len(batch),
442
+ 'metadata': metadata
443
+ }
444
+
445
+
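+ # Illustrative sketch of the index offsetting above: batching two single-edge
+ # graphs shifts the second graph's node indices by the size of the first.
+ def _demo_collate_offsets():
+ g = {'x': [[1.0], [0.0]], 'edge_index': [[0], [1]], 'y': [5.0],
+ 'num_nodes': 2, 'id': 'a', 'repo_name': '', 'file_path': ''}
+ batched = collate_graphs([g, dict(g, id='b')])
+ assert batched['edge_index'] == [[0, 2], [1, 3]] # edge (0, 1) became (2, 3)
+ assert batched['batch'] == [0, 0, 1, 1] # node-to-graph assignment
+
+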
446
+ class SimpleDataLoader:
447
+ """
448
+ Simple DataLoader implementation for batching data.
449
+
450
+ This provides a basic implementation that can be used when PyTorch
451
+ DataLoader is not available, and can easily be replaced with the real
452
+ PyTorch DataLoader when dependencies are installed.
453
+ """
454
+
455
+ def __init__(self, dataset, batch_size: int = 1, shuffle: bool = False, collate_fn=None):
456
+ """
457
+ Initialize the DataLoader.
458
+
459
+ Args:
460
+ dataset: Dataset to load from
461
+ batch_size: Number of samples per batch
462
+ shuffle: Whether to shuffle the data
463
+ collate_fn: Function to collate samples into batches
464
+ """
465
+ self.dataset = dataset
466
+ self.batch_size = batch_size
467
+ self.shuffle = shuffle
468
+ self.collate_fn = collate_fn or collate_graphs
469
+
470
+ # Create indices
471
+ self.indices = list(range(len(dataset)))
472
+ if shuffle:
474
+ random.shuffle(self.indices)
475
+
476
+ def __len__(self) -> int:
477
+ """Return number of batches."""
478
+ return (len(self.dataset) + self.batch_size - 1) // self.batch_size
479
+
480
+ def __iter__(self):
481
+ """Iterate over batches."""
482
+ for i in range(0, len(self.dataset), self.batch_size):
483
+ batch_indices = self.indices[i:i + self.batch_size]
484
+ batch = [self.dataset[idx] for idx in batch_indices]
485
+ yield self.collate_fn(batch)
486
+
487
+
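+ # Illustrative wiring sketch; 'data/train.jsonl' is a hypothetical path,
+ # not a file shipped with this module.
+ def _demo_simple_loader(jsonl_path: str = 'data/train.jsonl'):
+ """Batch a RubyASTDataset with the dependency-free SimpleDataLoader."""
+ dataset = RubyASTDataset(jsonl_path)
+ loader = SimpleDataLoader(dataset, batch_size=4, shuffle=True, collate_fn=collate_graphs)
+ for batch in loader:
+ print(batch['num_graphs'], len(batch['x'])) # graphs and total nodes in the batch
+ break # one batch is enough for the demo
+
+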
488
+ class PairedDataset:
489
+ """
490
+ Dataset class for loading paired Ruby AST and text description data.
491
+
492
+ This class loads the paired_data.jsonl file containing Ruby method data
493
+ and converts AST representations to graph objects paired with text descriptions.
494
+ For each method, it randomly samples one description from the available descriptions.
495
+ """
496
+
497
+ def __init__(self, jsonl_path: str, transform=None, seed: Optional[int] = None, limit: Optional[int] = None):
498
+ """
499
+ Initialize the paired dataset.
500
+
501
+ Args:
502
+ jsonl_path: Path to the paired_data.jsonl file
503
+ transform: Optional transform to apply to each sample
504
+ seed: Random seed for consistent description sampling
505
+ limit: Optional maximum number of samples to load.
506
+ """
507
+ self.jsonl_path = jsonl_path
508
+ self.transform = transform
509
+ self.converter = ASTGraphConverter()
510
+
511
+ if seed is not None:
512
+ random.seed(seed)
513
+
514
+ # Load the data
515
+ self.data = load_jsonl_file(jsonl_path, limit=limit)
516
+
517
+ print(f"Loaded {len(self.data)} samples from {jsonl_path}")
518
+
519
+ def __len__(self) -> int:
520
+ """Return the number of samples in the dataset."""
521
+ return len(self.data)
522
+
523
+ def __getitem__(self, idx: int) -> Tuple[Dict[str, Any], str]:
524
+ """
525
+ Get a sample from the dataset.
526
+
527
+ Args:
528
+ idx: Index of the sample
529
+
530
+ Returns:
531
+ Tuple of (graph_data, text_description)
532
+ """
533
+ if idx < 0 or idx >= len(self.data):
534
+ raise IndexError(f"Index {idx} out of range for dataset of size {len(self.data)}")
535
+
536
+ sample = self.data[idx]
537
+
538
+ # Convert AST to graph
539
+ graph_data = self.converter.parse_ast_json(sample['ast_json'])
540
+
541
+ # Randomly sample one description
542
+ descriptions = sample.get('descriptions', [])
543
+ if descriptions:
544
+ description = random.choice(descriptions)
545
+ text_description = description['text']
546
+ else:
547
+ # Fallback to method name if no descriptions available
548
+ text_description = sample.get('method_name', 'unknown_method')
549
+
550
+ # Create the graph data object
551
+ graph_result = {
552
+ 'x': graph_data['x'],
553
+ 'edge_index': graph_data['edge_index'],
554
+ 'num_nodes': graph_data['num_nodes'],
555
+ 'id': sample.get('id', f'sample_{idx}'),
556
+ 'repo_name': sample.get('repo_name', ''),
557
+ 'file_path': sample.get('file_path', '')
558
+ }
559
+
560
+ # Apply transform if provided
561
+ if self.transform:
562
+ graph_result = self.transform(graph_result)
563
+
564
+ return graph_result, text_description
565
+
566
+ def get_feature_dim(self) -> int:
567
+ """Return the dimension of node features."""
568
+ return self.converter.node_encoder.vocab_size
569
+
570
+
571
+ def collate_paired_data(batch: List[Tuple[Dict[str, Any], str]]) -> Tuple[Dict[str, Any], List[str]]:
572
+ """
573
+ Collate function for batching paired graph and text data.
574
+
575
+ Args:
576
+ batch: List of (graph_data, text_description) tuples
577
+
578
+ Returns:
579
+ Tuple of (batched_graph_data, list_of_text_descriptions)
580
+ """
581
+ if not batch:
582
+ raise ValueError("Cannot collate empty batch")
583
+
584
+ # Separate graph data and text descriptions
585
+ graph_batch = [item[0] for item in batch]
586
+ text_batch = [item[1] for item in batch]
587
+
588
+ # Collate graph data manually (similar to collate_graphs but without 'y' field)
589
+ all_x = []
590
+ all_edge_index = [[], []] # [source_nodes, target_nodes]
591
+ batch_idx = []
592
+ node_offset = 0
593
+
594
+ metadata = {
595
+ 'ids': [],
596
+ 'repo_names': [],
597
+ 'file_paths': []
598
+ }
599
+
600
+ for i, sample in enumerate(graph_batch):
601
+ # Node features
602
+ all_x.extend(sample['x'])
603
+
604
+ # Edge indices (offset by current node count)
605
+ edges = sample['edge_index']
606
+ if len(edges[0]) > 0: # Only offset if there are edges
607
+ for j in range(len(edges[0])):
608
+ all_edge_index[0].append(edges[0][j] + node_offset)
609
+ all_edge_index[1].append(edges[1][j] + node_offset)
610
+
611
+ # Batch indices for each node
612
+ num_nodes = sample['num_nodes']
613
+ batch_idx.extend([i] * num_nodes)
614
+ node_offset += num_nodes
615
+
616
+ # Metadata
617
+ metadata['ids'].append(sample['id'])
618
+ metadata['repo_names'].append(sample['repo_name'])
619
+ metadata['file_paths'].append(sample['file_path'])
620
+
621
+ batched_graphs = {
622
+ 'x': all_x,
623
+ 'edge_index': all_edge_index,
624
+ 'batch': batch_idx,
625
+ 'num_graphs': len(batch),
626
+ 'metadata': metadata
627
+ }
628
+
629
+ return batched_graphs, text_batch
630
+
631
+
632
+ class PairedDataLoader:
633
+ """
634
+ DataLoader for paired graph and text data.
635
+
636
+ Extends SimpleDataLoader to handle paired (graph, text) data.
637
+ """
638
+
639
+ def __init__(self, dataset, batch_size: int = 1, shuffle: bool = False):
640
+ """
641
+ Initialize the PairedDataLoader.
642
+
643
+ Args:
644
+ dataset: PairedDataset to load from
645
+ batch_size: Number of samples per batch
646
+ shuffle: Whether to shuffle the data
647
+ """
648
+ self.dataset = dataset
649
+ self.batch_size = batch_size
650
+ self.shuffle = shuffle
651
+
652
+ # Create indices
653
+ self.indices = list(range(len(dataset)))
654
+ if shuffle:
655
+ random.shuffle(self.indices)
656
+
657
+ def __len__(self) -> int:
658
+ """Return number of batches."""
659
+ return (len(self.dataset) + self.batch_size - 1) // self.batch_size
660
+
661
+ def __iter__(self):
662
+ """Iterate over batches."""
663
+ for i in range(0, len(self.dataset), self.batch_size):
664
+ batch_indices = self.indices[i:i + self.batch_size]
665
+ batch = [self.dataset[idx] for idx in batch_indices]
666
+ yield collate_paired_data(batch)
667
+
668
+
669
+
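+ # Illustrative sketch; 'data/paired_data.jsonl' is a hypothetical path.
+ def _demo_paired_loader(jsonl_path: str = 'data/paired_data.jsonl'):
+ """Fetch one batch of (graph, description) pairs."""
+ dataset = PairedDataset(jsonl_path, seed=42)
+ loader = PairedDataLoader(dataset, batch_size=2, shuffle=False)
+ graphs, texts = next(iter(loader))
+ assert graphs['num_graphs'] == len(texts) # one description per graph
+
+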
670
+ class PrecomputedRubyASTDataset:
671
+ """
672
+ Dataset class for loading precomputed Ruby AST graph data.
673
+
674
+ This class can load .pt files containing pre-converted PyTorch Geometric
675
+ Data objects for speed, but also supports processing .jsonl files as a fallback.
676
+ """
677
+
678
+ def __init__(self, path: str, transform=None):
679
+ """
680
+ Initialize the dataset.
681
+
682
+ Args:
683
+ path: Path to the .pt or .jsonl file containing graph data.
684
+ transform: Optional transform to apply to each sample.
685
+ """
686
+ self.path = path
687
+ self.transform = transform
688
+
689
+ if not TORCH_AVAILABLE:
690
+ raise ImportError("PyTorch and PyG are required for this dataset.")
691
+
692
+ if path.endswith('.pt'):
693
+ # Load the precomputed data into RAM
694
+ self.data = torch.load(path, weights_only=False)
695
+ print(f"Loaded {len(self.data)} precomputed graphs from {path}")
696
+ elif path.endswith('.jsonl'):
697
+ print(f"Processing JSONL file into graphs: {path}")
698
+ jsonl_data = load_jsonl_file(path)
699
+ converter = ASTGraphConverter()
700
+ self.data = []
701
+ for sample in jsonl_data:
702
+ graph_data = converter.parse_ast_json(sample['ast_json'])
703
+
704
+ x = torch.tensor(graph_data['x'], dtype=torch.float)
705
+ edge_index = torch.tensor(graph_data['edge_index'], dtype=torch.long)
706
+ y = torch.tensor([sample.get('complexity_score', 5.0)], dtype=torch.float)
707
+
708
+ data_obj = Data(x=x, edge_index=edge_index, y=y)
709
+
+ # Add positional attributes; always set them so PyG collation stays consistent
+ ea = graph_data.get('edge_attr', [])
+ if ea:
+ data_obj.edge_attr = torch.tensor(ea, dtype=torch.float).reshape(-1, 3)
+ else:
+ data_obj.edge_attr = torch.zeros((0, 3), dtype=torch.float)
+
+ np_ = graph_data.get('node_pos', [])
+ data_obj.node_pos = torch.tensor(np_ if np_ else [[0, 0]], dtype=torch.float)
720
+
721
+ self.data.append(data_obj)
722
+ print(f"Converted {len(self.data)} graphs from {path}")
723
+ else:
724
+ raise ValueError(f"Unsupported file type: {path}. Please provide a .pt or .jsonl file.")
725
+
726
+ def __len__(self) -> int:
727
+ """Return the number of samples in the dataset."""
728
+ return len(self.data)
729
+
730
+ def __getitem__(self, idx: int):
731
+ """
732
+ Get a sample from the dataset.
733
+
734
+ Args:
735
+ idx: Index of the sample
736
+
737
+ Returns:
738
+ PyTorch Geometric Data object
739
+ """
740
+ if idx < 0 or idx >= len(self.data):
741
+ raise IndexError(f"Index {idx} out of range for dataset of size {len(self.data)}")
742
+
743
+ sample = self.data[idx]
744
+
745
+ if self.transform:
746
+ sample = self.transform(sample)
747
+
748
+ return sample
749
+
750
+
751
+ class PreCollatedDataset:
752
+ """
753
+ Dataset class for loading pre-collated batches of graph data.
754
+
755
+ This class loads a .pt file where each item is an already-collated
756
+ `torch_geometric.data.Batch` object. This is the most efficient
757
+ way to load data as it eliminates all real-time collation overhead.
758
+ """
759
+ def __init__(self, pt_path: str):
760
+ """
761
+ Initialize the dataset.
762
+
763
+ Args:
764
+ pt_path: Path to the .pt file containing pre-collated batches.
765
+ """
766
+ # Load the list of pre-collated batches into RAM
767
+ self.batches = torch.load(pt_path, weights_only=False)
768
+ print(f"Loaded {len(self.batches)} pre-collated batches from {pt_path}")
769
+
770
+ def __len__(self):
771
+ return len(self.batches)
772
+
773
+ def __getitem__(self, idx):
774
+ return self.batches[idx]
775
+
776
+
777
+ def create_data_loaders(train_path: str, val_path: str, batch_size: int = 32, shuffle: bool = True, num_workers: Optional[int] = None, pre_collated: bool = False):
778
+ """
779
+ Create train and validation data loaders.
780
+
781
+ Supports two modes:
782
+ 1. Standard loading from a dataset of individual graphs (`pre_collated=False`).
783
+ This uses a PyG DataLoader to perform real-time batching.
784
+ 2. Pre-collated loading from a dataset of pre-batched graphs (`pre_collated=True`).
785
+ This is the most performant option, as it has near-zero CPU overhead.
786
+
787
+ Args:
788
+ train_path: Path to training .pt file.
789
+ val_path: Path to validation .pt file.
790
+ batch_size: Batch size (used only if `pre_collated=False`).
791
+ shuffle: Whether to shuffle training data.
792
+ num_workers: Number of workers for data loading (used only if `pre_collated=False`).
793
+ pre_collated: Whether the dataset files contain pre-collated batches.
794
+
795
+ Returns:
796
+ Tuple of (train_loader, val_loader)
797
+ """
798
+ if not TORCH_AVAILABLE:
799
+ raise ImportError("PyTorch is required to create data loaders.")
800
+
801
+ if pre_collated:
802
+ # --- Pre-collated path (most efficient) ---
803
+ train_dataset = PreCollatedDataset(train_path)
804
+ val_dataset = PreCollatedDataset(val_path)
805
+
806
+ # The collate_fn simply returns the already-collated batch.
807
+ # The input `batch` is a list of size 1 containing our pre-made Batch object.
808
+ collate_fn = lambda x: x[0]
809
+
810
+ # DataLoader is just a simple iterator here, no real collation work.
811
+ # num_workers > 0 can actually be slower due to overhead of sending
812
+ # already-large batches between processes.
813
+ from torch.utils.data import DataLoader
814
+ train_loader = DataLoader(train_dataset, batch_size=1, shuffle=shuffle, num_workers=0, collate_fn=collate_fn)
815
+ val_loader = DataLoader(val_dataset, batch_size=1, shuffle=False, num_workers=0, collate_fn=collate_fn)
816
+
817
+ print("✅ Using pre-collated data loader (maximum performance).")
818
+
819
+ else:
820
+ # --- Standard real-time collation path ---
821
+ from torch_geometric.loader import DataLoader
822
+ train_dataset = PrecomputedRubyASTDataset(train_path)
823
+ val_dataset = PrecomputedRubyASTDataset(val_path)
824
+
825
+ if num_workers is None:
826
+ num_workers = os.cpu_count()
827
+
828
+ train_loader = DataLoader(
829
+ train_dataset,
830
+ batch_size=batch_size,
831
+ shuffle=shuffle,
832
+ num_workers=num_workers,
833
+ pin_memory=torch.cuda.is_available(),
834
+ persistent_workers=num_workers > 0
835
+ )
836
+ val_loader = DataLoader(
837
+ val_dataset,
838
+ batch_size=batch_size,
839
+ shuffle=False,
840
+ num_workers=num_workers,
841
+ pin_memory=torch.cuda.is_available(),
842
+ persistent_workers=num_workers > 0
843
+ )
844
+
845
+ print(f"✅ Using standard PyG DataLoader with {num_workers} workers.")
846
+
847
+ return train_loader, val_loader
848
+
849
+
850
+
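+ # Illustrative call; 'train_graphs.pt' and 'val_graphs.pt' are hypothetical
+ # filenames for files produced by an offline precompute step.
+ def _demo_create_data_loaders():
+ """Iterate one training batch from the standard (non-pre-collated) path."""
+ train_loader, val_loader = create_data_loaders(
+ 'train_graphs.pt', 'val_graphs.pt', batch_size=64, shuffle=True
+ )
+ for batch in train_loader:
+ print(batch.num_graphs) # PyG Batch objects expose num_graphs
+ break
+
+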
851
+ def create_paired_data_loaders(paired_data_path: str, batch_size: int = 32, shuffle: bool = True, seed: Optional[int] = None):
852
+ """
853
+ Create data loader for paired graph and text data.
854
+
855
+ Args:
856
+ paired_data_path: Path to paired_data.jsonl file
857
+ batch_size: Batch size for the loader
858
+ shuffle: Whether to shuffle the data
859
+ seed: Random seed for consistent description sampling
860
+
861
+ Returns:
862
+ PairedDataLoader instance
863
+ """
864
+ dataset = PairedDataset(paired_data_path, seed=seed)
865
+ loader = PairedDataLoader(dataset, batch_size=batch_size, shuffle=shuffle)
866
+
867
+ return loader
868
+
869
+
870
+ class AutoregressiveASTDataset:
871
+ """
872
+ Dataset class for autoregressive AST generation training.
873
+
874
+ This class loads paired Ruby AST and text description data and converts
875
+ each AST into a sequence of (partial_graph, target_node) pairs for
876
+ autoregressive training. Each method generates multiple training examples.
877
+ """
878
+
879
+ def __init__(self, paired_data_path: str, max_sequence_length: int = 50, seed: Optional[int] = None,
880
+ precomputed_embeddings_path: Optional[str] = None):
881
+ """
882
+ Initialize the autoregressive dataset.
883
+
884
+ Args:
885
+ paired_data_path: Path to the paired_data.jsonl file
886
+ max_sequence_length: Maximum number of nodes per sequence
887
+ seed: Random seed for consistent description sampling
888
+ precomputed_embeddings_path: Path to pre-computed text embeddings file (optional)
889
+ """
890
+ self.paired_data_path = paired_data_path
891
+ self.max_sequence_length = max_sequence_length
892
+ self.converter = ASTGraphConverter()
893
+
894
+ if seed is not None:
895
+ random.seed(seed)
896
+
897
+ # Load pre-computed embeddings if available
898
+ self.precomputed_embeddings = {}
899
+ if precomputed_embeddings_path and os.path.exists(precomputed_embeddings_path):
900
+ try:
901
+ if TORCH_AVAILABLE:
902
+ self.precomputed_embeddings = torch.load(precomputed_embeddings_path, map_location='cpu', weights_only=True)
903
+ print(f"✅ Loaded {len(self.precomputed_embeddings)} pre-computed text embeddings")
904
+ else:
905
+ print("⚠️ PyTorch not available, skipping pre-computed embeddings")
906
+ except Exception as e:
907
+ print(f"⚠️ Warning: Could not load pre-computed embeddings: {e}")
908
+ elif precomputed_embeddings_path:
909
+ print(f"⚠️ Warning: Pre-computed embeddings file not found: {precomputed_embeddings_path}")
910
+
911
+ # Load the paired data
912
+ self.paired_data = load_jsonl_file(paired_data_path)
913
+
914
+ # Generate sequential training pairs from all methods
915
+ self.sequential_pairs = []
916
+ self._generate_all_sequential_pairs()
917
+
918
+ print(f"Loaded {len(self.paired_data)} methods from {paired_data_path}")
919
+ print(f"Generated {len(self.sequential_pairs)} sequential training pairs")
920
+
921
+ def _generate_all_sequential_pairs(self):
922
+ """Generate sequential training pairs from all ASTs in the dataset."""
923
+ for sample in self.paired_data:
924
+ try:
925
+ # Get text description
926
+ descriptions = sample.get('descriptions', [])
927
+ if descriptions:
928
+ description = random.choice(descriptions)
929
+ text_description = description['text']
930
+ else:
931
+ # Fallback to method name if no descriptions available
932
+ text_description = sample.get('method_name', 'unknown_method')
933
+
934
+ # Create sequential pairs for this AST
935
+ sequential_pairs = self._create_sequential_pairs(
936
+ sample['ast_json'],
937
+ text_description
938
+ )
939
+
940
+ # Add to global list
941
+ self.sequential_pairs.extend(sequential_pairs)
942
+
943
+ except Exception as e:
944
+ # Skip malformed samples gracefully
945
+ print(f"Warning: Skipping sample {sample.get('id', 'unknown')} due to error: {e}")
946
+ continue
947
+
948
+ def _create_sequential_pairs(self, ast_json: str, text_description: str) -> List[Dict[str, Any]]:
949
+ """
950
+ Convert single AST into sequence of (partial_graph, target_node) pairs.
951
+
952
+ Args:
953
+ ast_json: JSON string representing the AST
954
+ text_description: Text description for this method
955
+
956
+ Returns:
957
+ List of sequential training pairs
958
+ """
959
+ pairs = []
960
+
961
+ try:
962
+ # Extract nodes in proper order along with their connections
963
+ nodes, connections = self._extract_nodes_and_connections_in_order(ast_json)
964
+
965
+ # Limit sequence length if needed
966
+ if len(nodes) > self.max_sequence_length:
967
+ nodes = nodes[:self.max_sequence_length]
968
+ # Also limit connections to only include those within the sequence
969
+ filtered_connections = []
970
+ for src, tgt in connections:
971
+ if src < self.max_sequence_length and tgt < self.max_sequence_length:
972
+ filtered_connections.append((src, tgt))
973
+ connections = filtered_connections
974
+
975
+ # Get pre-computed text embedding if available, otherwise store text
976
+ text_embedding = None
977
+ if text_description in self.precomputed_embeddings:
978
+ text_embedding = self.precomputed_embeddings[text_description]
979
+
980
+ # Create sequential pairs
981
+ for i in range(len(nodes)):
982
+ # Build partial graph with nodes 0 to i-1
983
+ partial_graph = self._build_partial_graph(nodes[:i])
984
+
985
+ # Target is the i-th node
986
+ target_node = nodes[i]
987
+
988
+ # Create target connections for this step
989
+ # This represents which existing nodes (0 to i-1) the new node i should connect to
990
+ target_connections = self._create_target_connections(i, connections)
991
+
992
+ pair = {
993
+ 'text_description': text_description,
994
+ 'text_embedding': text_embedding, # Pre-computed embedding if available
995
+ 'partial_graph': partial_graph,
996
+ 'target_node': target_node,
997
+ 'target_connections': target_connections,
998
+ 'step': i,
999
+ 'total_steps': len(nodes)
1000
+ }
1001
+
1002
+ pairs.append(pair)
1003
+
1004
+ except Exception as e:
1005
+ # Return empty list for malformed ASTs
1006
+ print(f"Warning: Failed to create sequential pairs: {e}")
1007
+
1008
+ return pairs
1009
+
1010
+ def _extract_nodes_and_connections_in_order(self, ast_json: str) -> Tuple[List[Dict[str, Any]], List[Tuple[int, int]]]:
1011
+ """
1012
+ Extract nodes and their connections from AST in proper depth-first order.
1013
+
1014
+ Args:
1015
+ ast_json: JSON string representing the AST
1016
+
1017
+ Returns:
1018
+ Tuple of (nodes_list, connections_list) where connections are (parent_idx, child_idx) pairs
1019
+ """
1020
+ try:
1021
+ ast_data = json.loads(ast_json)
1022
+ nodes = []
1023
+ connections = []
1024
+ self._traverse_ast_nodes_with_connections(ast_data, nodes, connections, parent_idx=None)
1025
+ return nodes, connections
1026
+ except Exception:  # Exception already subsumes json.JSONDecodeError
1027
+ # Return empty lists for malformed JSON
1028
+ return [], []
1029
+
1030
+ def _traverse_ast_nodes_with_connections(self, node: Union[Dict, List, str, int, float, None],
1031
+ nodes: List[Dict[str, Any]],
1032
+ connections: List[Tuple[int, int]],
1033
+ parent_idx: Optional[int] = None):
1034
+ """
1035
+ Recursively traverse AST and collect nodes and connections in depth-first order.
1036
+
1037
+ Args:
1038
+ node: Current AST node
1039
+ nodes: List to collect nodes
1040
+ connections: List to collect connections as (parent_idx, child_idx) pairs
1041
+ parent_idx: Index of parent node
1042
+ """
1043
+ if isinstance(node, dict) and 'type' in node:
1044
+ # This is an AST node with a type
1045
+ current_idx = len(nodes)
1046
+ node_info = {
1047
+ 'node_type': node['type'],
1048
+ 'features': self.converter.node_encoder.create_node_features(node['type']),
1049
+ 'raw_node': node # Keep reference for debugging
1050
+ }
1051
+ nodes.append(node_info)
1052
+
1053
+ # Add connection from parent to current node
1054
+ if parent_idx is not None:
1055
+ connections.append((parent_idx, current_idx))
1056
+
1057
+ # Traverse children
1058
+ if 'children' in node:
1059
+ for child in node['children']:
1060
+ self._traverse_ast_nodes_with_connections(child, nodes, connections, current_idx)
1061
+
1062
+ elif isinstance(node, list):
1063
+ # Process list of nodes
1064
+ for child in node:
1065
+ self._traverse_ast_nodes_with_connections(child, nodes, connections, parent_idx)
1066
+
1067
+ def _create_target_connections(self, node_idx: int, all_connections: List[Tuple[int, int]]) -> List[float]:
1068
+ """
1069
+ Create target connection vector for a specific node being added.
1070
+
1071
+ Args:
1072
+ node_idx: Index of the node being added to the graph
1073
+ all_connections: List of all connections in the full AST as (parent_idx, child_idx) pairs
1074
+
1075
+ Returns:
1076
+ Binary vector of length max_nodes indicating which existing nodes to connect to
1077
+ """
1078
+ # Initialize with zeros for all possible connections
1079
+ target_vector = [0.0] * 100 # max_nodes = 100 from model
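+ # Worked example (illustrative): with node_idx=3 and connections
+ # [(0, 1), (0, 2), (2, 3)], only (2, 3) has this node as its child,
+ # so the returned vector is 1.0 at index 2 and 0.0 everywhere else.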
1080
+
1081
+ # Find all connections where this node is the target (child)
1082
+ # We want to know which existing nodes (with index < node_idx) should connect to this node
1083
+ for parent_idx, child_idx in all_connections:
1084
+ if child_idx == node_idx and parent_idx < node_idx and parent_idx < 100:
1085
+ target_vector[parent_idx] = 1.0
1086
+
1087
+ return target_vector
1088
+
1089
+ def _traverse_ast_nodes(self, node: Union[Dict, List, str, int, float, None], nodes: List[Dict[str, Any]]):
1090
+ """
1091
+ Recursively traverse AST and collect nodes in depth-first order.
1092
+
1093
+ Args:
1094
+ node: Current AST node
1095
+ nodes: List to collect nodes
1096
+ """
1097
+ if isinstance(node, dict) and 'type' in node:
1098
+ # This is an AST node with a type
1099
+ node_info = {
1100
+ 'node_type': node['type'],
1101
+ 'features': self.converter.node_encoder.create_node_features(node['type']),
1102
+ 'raw_node': node # Keep reference for debugging
1103
+ }
1104
+ nodes.append(node_info)
1105
+
1106
+ # Traverse children
1107
+ if 'children' in node:
1108
+ for child in node['children']:
1109
+ self._traverse_ast_nodes(child, nodes)
1110
+
1111
+ elif isinstance(node, list):
1112
+ # Process list of nodes
1113
+ for child in node:
1114
+ self._traverse_ast_nodes(child, nodes)
1115
+
1116
+ def _build_partial_graph(self, nodes: List[Dict[str, Any]]) -> Dict[str, Any]:
1117
+ """
1118
+ Build partial graph from first i nodes.
1119
+
1120
+ Args:
1121
+ nodes: List of nodes to include in partial graph
1122
+
1123
+ Returns:
1124
+ Partial graph representation
1125
+ """
1126
+ if not nodes:
1127
+ # Empty graph case
1128
+ return {
1129
+ 'x': [],
1130
+ 'edge_index': [[], []],
1131
+ 'num_nodes': 0
1132
+ }
1133
+
1134
+ # Extract node features
1135
+ node_features = [node['features'] for node in nodes]
1136
+
1137
+ # Create simple sequential connections (each node connects to next)
1138
+ # This is a simplified approach - in practice you'd want to preserve
1139
+ # the actual AST structure relationships
1140
+ edge_list = []
1141
+ for i in range(len(nodes) - 1):
1142
+ edge_list.append([i, i + 1]) # Forward edge
1143
+ edge_list.append([i + 1, i]) # Backward edge for undirected
1144
+
1145
+ if edge_list:
1146
+ edge_index = [[], []]
1147
+ for source, target in edge_list:
1148
+ edge_index[0].append(source)
1149
+ edge_index[1].append(target)
1150
+ else:
1151
+ edge_index = [[], []]
1152
+
1153
+ return {
1154
+ 'x': node_features,
1155
+ 'edge_index': edge_index,
1156
+ 'num_nodes': len(nodes)
1157
+ }
1158
+
1159
+ def __len__(self) -> int:
1160
+ """Return the number of sequential training pairs."""
1161
+ return len(self.sequential_pairs)
1162
+
1163
+ def __getitem__(self, idx: int) -> Dict[str, Any]:
1164
+ """
1165
+ Get a sequential training pair.
1166
+
1167
+ Args:
1168
+ idx: Index of the training pair
1169
+
1170
+ Returns:
1171
+ Dictionary containing partial graph and target node data
1172
+ """
1173
+ if idx < 0 or idx >= len(self.sequential_pairs):
1174
+ raise IndexError(f"Index {idx} out of range for dataset of size {len(self.sequential_pairs)}")
1175
+
1176
+ return self.sequential_pairs[idx]
1177
+
1178
+ def get_feature_dim(self) -> int:
1179
+ """Return the dimension of node features."""
1180
+ return self.converter.node_encoder.vocab_size
1181
+
1182
+
1183
+ def collate_autoregressive_data(batch: List[Dict[str, Any]]) -> Dict[str, Any]:
1184
+ """
1185
+ Collate function for batching autoregressive training data.
1186
+
1187
+ Args:
1188
+ batch: List of sequential training pairs
1189
+
1190
+ Returns:
1191
+ Batched autoregressive training data
1192
+ """
1193
+ if not batch:
1194
+ raise ValueError("Cannot collate empty batch")
1195
+
1196
+ # Separate different components
1197
+ text_descriptions = [item['text_description'] for item in batch]
1198
+ text_embeddings = [item.get('text_embedding') for item in batch]
1199
+ steps = [item['step'] for item in batch]
1200
+ total_steps = [item['total_steps'] for item in batch]
1201
+
1202
+ # Collate partial graphs
1203
+ partial_graphs = [item['partial_graph'] for item in batch]
1204
+
1205
+ # Collate node features from partial graphs
1206
+ all_x = []
1207
+ all_edge_index = [[], []]
1208
+ batch_idx = []
1209
+ node_offset = 0
1210
+
1211
+ for i, graph in enumerate(partial_graphs):
1212
+ # Node features
1213
+ if graph['x']:
1214
+ all_x.extend(graph['x'])
1215
+
1216
+ # Edge indices (offset by current node count)
1217
+ edges = graph['edge_index']
1218
+ if len(edges[0]) > 0:
1219
+ for j in range(len(edges[0])):
1220
+ all_edge_index[0].append(edges[0][j] + node_offset)
1221
+ all_edge_index[1].append(edges[1][j] + node_offset)
1222
+
1223
+ # Batch indices for each node
1224
+ num_nodes = graph['num_nodes']
1225
+ batch_idx.extend([i] * num_nodes)
1226
+ node_offset += num_nodes
1227
+
1228
+ # Target nodes and connections
1229
+ target_nodes = [item['target_node'] for item in batch]
1230
+ target_node_types = [node['node_type'] for node in target_nodes]
1231
+ target_node_features = [node['features'] for node in target_nodes]
1232
+ target_connections = [item['target_connections'] for item in batch]
1233
+
1234
+ return {
1235
+ 'text_descriptions': text_descriptions,
1236
+ 'text_embeddings': text_embeddings, # Can contain None values if not pre-computed
1237
+ 'partial_graphs': {
1238
+ 'x': all_x,
1239
+ 'edge_index': all_edge_index,
1240
+ 'batch': batch_idx,
1241
+ 'num_graphs': len(batch)
1242
+ },
1243
+ 'target_node_types': target_node_types,
1244
+ 'target_node_features': target_node_features,
1245
+ 'target_connections': target_connections,
1246
+ 'steps': steps,
1247
+ 'total_steps': total_steps
1248
+ }
1249
+
1250
+
1251
+ class AutoregressiveDataLoader:
1252
+ """
1253
+ DataLoader for autoregressive AST training data.
1254
+ """
1255
+
1256
+ def __init__(self, dataset: AutoregressiveASTDataset, batch_size: int = 8, shuffle: bool = True):
1257
+ """
1258
+ Initialize the AutoregressiveDataLoader.
1259
+
1260
+ Args:
1261
+ dataset: AutoregressiveASTDataset to load from
1262
+ batch_size: Number of sequential pairs per batch
1263
+ shuffle: Whether to shuffle the data
1264
+ """
1265
+ self.dataset = dataset
1266
+ self.batch_size = batch_size
1267
+ self.shuffle = shuffle
1268
+
1269
+ # Create indices
1270
+ self.indices = list(range(len(dataset)))
1271
+ if shuffle:
1272
+ random.shuffle(self.indices)
1273
+
1274
+ def __len__(self) -> int:
1275
+ """Return number of batches."""
1276
+ return (len(self.dataset) + self.batch_size - 1) // self.batch_size
1277
+
1278
+ def __iter__(self):
1279
+ """Iterate over batches."""
1280
+ for i in range(0, len(self.dataset), self.batch_size):
1281
+ batch_indices = self.indices[i:i + self.batch_size]
1282
+ batch = [self.dataset[idx] for idx in batch_indices]
1283
+ yield collate_autoregressive_data(batch)
1284
+
1285
+
1286
+ def create_autoregressive_data_loader(paired_data_path: str, batch_size: int = 8, shuffle: bool = True,
1287
+ max_sequence_length: int = 50, seed: Optional[int] = None,
1288
+ precomputed_embeddings_path: Optional[str] = None,
1289
+ num_workers: Optional[int] = None, pin_memory: bool = True):
1290
+ """
1291
+ Create data loader for autoregressive AST training.
1292
+
1293
+ Args:
1294
+ paired_data_path: Path to paired_data.jsonl file
1295
+ batch_size: Number of sequential pairs per batch
1296
+ shuffle: Whether to shuffle the data
1297
+ max_sequence_length: Maximum sequence length per method
1298
+ seed: Random seed for consistent description sampling
1299
+ precomputed_embeddings_path: Path to pre-computed text embeddings file
1300
+ num_workers: Number of worker processes for data loading (defaults to CPU count)
1301
+ pin_memory: Whether to use pinned memory for faster GPU transfer
1302
+
1303
+ Returns:
1304
+ DataLoader instance (PyTorch DataLoader if available, otherwise AutoregressiveDataLoader)
1305
+ """
1306
+ dataset = AutoregressiveASTDataset(
1307
+ paired_data_path,
1308
+ max_sequence_length=max_sequence_length,
1309
+ seed=seed,
1310
+ precomputed_embeddings_path=precomputed_embeddings_path
1311
+ )
1312
+
1313
+ # Use PyTorch DataLoader if available for better performance
1314
+ if TORCH_AVAILABLE:
1316
+ if num_workers is None:
1317
+ num_workers = os.cpu_count()
1318
+
1319
+ try:
1320
+ from torch.utils.data import DataLoader
1321
+
1322
+ # Create PyTorch DataLoader with optimizations
1323
+ loader = DataLoader(
1324
+ dataset,
1325
+ batch_size=batch_size,
1326
+ shuffle=shuffle,
1327
+ num_workers=num_workers,
1328
+ pin_memory=pin_memory and torch.cuda.is_available(),
1329
+ collate_fn=collate_autoregressive_data,
1330
+ persistent_workers=num_workers > 0, # Keep workers alive between epochs
1331
+ prefetch_factor=2 if num_workers > 0 else None  # prefetch_factor must be None when num_workers == 0
1332
+ )
1333
+
1334
+ print(f"✅ Using optimized PyTorch DataLoader with {num_workers} workers, pin_memory={pin_memory and torch.cuda.is_available()}")
1335
+ return loader
1336
+
1337
+ except Exception as e:
1338
+ print(f"⚠️ Warning: Could not create PyTorch DataLoader ({e}), falling back to custom loader")
1339
+
1340
+ # Fallback to custom loader
1341
+ loader = AutoregressiveDataLoader(dataset, batch_size=batch_size, shuffle=shuffle)
1342
+ print("ℹ️ Using custom AutoregressiveDataLoader")
1343
+
1344
+ return loader
1345
+
1346
+
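+ # Illustrative call; 'data/paired_data.jsonl' is a hypothetical path and the
+ # pre-computed embeddings file is optional.
+ def _demo_autoregressive_loader():
+ """Fetch one batch of (partial_graph, target_node) training pairs."""
+ loader = create_autoregressive_data_loader(
+ 'data/paired_data.jsonl', batch_size=8, max_sequence_length=50, seed=42
+ )
+ batch = next(iter(loader))
+ print(batch['steps'][:4], batch['total_steps'][:4]) # generation progress per pair
+
+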
1347
+ class HierarchicalASTDataset(RubyASTDataset):
1348
+ """
1349
+ Dataset for loading a single level of a hierarchical AST dataset.
1350
+
1351
+ This class inherits from RubyASTDataset to reuse the same AST-to-graph
1352
+ conversion logic. It is used to load one of the `_level_N.jsonl` files.
1353
+ """
1354
+ def __init__(self, jsonl_path: str, transform=None):
1355
+ """
1356
+ Initialize the dataset for a specific AST level.
1357
+
1358
+ Args:
1359
+ jsonl_path: Path to the JSONL file for a specific level.
1360
+ transform: Optional transform to apply to each sample.
1361
+ """
1362
+ super().__init__(jsonl_path, transform)
1363
+
1364
+
1365
+ def create_hierarchical_data_loader(dataset_path: str, batch_size: int, shuffle: bool, num_workers: Optional[int] = None):
1366
+ """
1367
+ Creates a data loader for a specific level of the hierarchical dataset.
1368
+
1369
+ Args:
1370
+ dataset_path: The full path to the `_level_N.jsonl` file.
1371
+ batch_size: The batch size for the data loader.
1372
+ shuffle: Whether to shuffle the data.
1373
+ num_workers: The number of worker processes for data loading.
1374
+
1375
+ Returns:
1376
+ A DataLoader instance for the specified dataset level.
1377
+ """
1378
+ dataset = HierarchicalASTDataset(dataset_path)
1379
+
1380
+ if TORCH_AVAILABLE:
1381
+ try:
1382
+ from torch.utils.data import DataLoader  # samples are plain dicts, so the standard loader honors collate_graphs
1383
+ if num_workers is None:
1384
+ num_workers = os.cpu_count()
1385
+
1386
+ loader = DataLoader(
1387
+ dataset,
1388
+ batch_size=batch_size,
1389
+ shuffle=shuffle,
1390
+ num_workers=num_workers,
1391
+ pin_memory=torch.cuda.is_available(),
1392
+ persistent_workers=num_workers > 0,
1393
+ collate_fn=collate_graphs # Reusing the existing collate function
1394
+ )
1395
+ logging.info(f"Created DataLoader for {dataset_path} with {num_workers} workers.")
1396
+ return loader
1397
+ except ImportError:
1398
+ logging.warning("PyTorch DataLoader unavailable. Falling back to SimpleDataLoader.")
1399
+
1400
+ # Fallback to SimpleDataLoader
1401
+ return SimpleDataLoader(dataset, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_graphs)
1402
+
1403
+
1404
+ class HierarchicalPairedDataset(PairedDataset):
1405
+ """
1406
+ Dataset for loading a single level of a hierarchical dataset with paired text.
1407
+
1408
+ This class inherits from PairedDataset to reuse the same logic for
1409
+ processing graph data and randomly sampling text descriptions.
1410
+ """
1411
+ def __init__(self, jsonl_path: str, transform=None, seed: Optional[int] = None, limit: Optional[int] = None):
1412
+ """
1413
+ Initialize the dataset for a specific AST level.
1414
+
1415
+ Args:
1416
+ jsonl_path: Path to the JSONL file for a specific level (e.g., train_paired_data_level_0.jsonl).
1417
+ transform: Optional transform to apply to each sample.
1418
+ seed: Random seed for consistent description sampling.
1419
+ limit: Optional maximum number of samples to load.
1420
+ """
1421
+ super().__init__(jsonl_path, transform, seed, limit)
1422
+
1423
+
1424
+ def create_hierarchical_paired_data_loader(dataset_path: str, batch_size: int, shuffle: bool, num_workers: Optional[int] = None, limit: Optional[int] = None):
1425
+ """
1426
+ Creates a data loader for a specific level of the hierarchical paired dataset.
1427
+
1428
+ Args:
1429
+ dataset_path: The full path to the `_level_N.jsonl` file.
1430
+ batch_size: The batch size for the data loader.
1431
+ shuffle: Whether to shuffle the data.
1432
+ num_workers: The number of worker processes for data loading.
1433
+ limit: Optional maximum number of samples to load.
1434
+
1435
+ Returns:
1436
+ A DataLoader instance for the specified dataset level.
1437
+ """
1438
+ dataset = HierarchicalPairedDataset(dataset_path, limit=limit)
1439
+
1440
+ if TORCH_AVAILABLE:
1441
+ try:
1442
+ from torch.utils.data import DataLoader
1443
+ if num_workers is None:
1444
+ num_workers = 0 # Disabled for now to prevent file handle exhaustion
1445
+
1446
+ loader = DataLoader(
1447
+ dataset,
1448
+ batch_size=batch_size,
1449
+ shuffle=shuffle,
1450
+ num_workers=num_workers,
1451
+ pin_memory=torch.cuda.is_available(),
1452
+ persistent_workers=num_workers > 0,
1453
+ collate_fn=collate_paired_data
1454
+ )
1455
+ logging.info(f"Created PyTorch DataLoader for {dataset_path} with {num_workers} workers.")
1456
+ return loader
1457
+ except (ImportError, Exception) as e:
1458
+ logging.warning(f"PyTorch DataLoader creation failed ({e}). Falling back to PairedDataLoader.")
1459
+
1460
+ # Fallback to custom PairedDataLoader
1461
+ return PairedDataLoader(dataset, batch_size=batch_size, shuffle=shuffle)