'Theoretical bound of the coded length given a probability distribution. Args: c: The binary codes. Belong to {0, 1}. p: The probability of: P(code==+1) Returns: The average code length. Note: the average code length can be greater than 1 bit (e.g. when encoding the least likely symbol).'
def _Apply(self, c, p):
    entropy = ((1.0 - c) * tf.log(1.0 - p) + c * tf.log(p)) / -math.log(2)
    entropy = tf.reduce_mean(entropy)
    return entropy
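The same per-symbol length formula can be checked without TensorFlow. The sketch below is a plain-Python stand-in for `_Apply` (the name `coded_length` is illustrative, not from the source): a symbol with code 1 costs -log2(p) bits, a symbol with code 0 costs -log2(1-p) bits, and the block averages over all symbols.

```python
import math

def coded_length(codes, p):
    # Per-symbol length in bits: -log2(p) for a 1, -log2(1 - p) for a 0,
    # matching ((1 - c) * log(1 - p) + c * log(p)) / -log(2) above.
    bits = [((1.0 - c) * math.log(1.0 - p) + c * math.log(p)) / -math.log(2)
            for c in codes]
    return sum(bits) / len(bits)

# Encoding the likely symbol (p = 0.9) costs well under 1 bit each...
print(coded_length([1, 1, 1, 1], 0.9))
# ...while the unlikely symbol costs more than 1 bit, as the docstring notes.
print(coded_length([0, 0, 0, 0], 0.9))
```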
'Creates an initializer. Args: dims: Dimension(s) index to compute standard deviation: 1.0 / sqrt(product(shape[dims])) **kwargs: Extra keyword arguments to pass to tf.truncated_normal.'
def __init__(self, dims=(0,), **kwargs):
    if isinstance(dims, (int, long)):
        self._dims = [dims]
    else:
        self._dims = dims
    self._kwargs = kwargs
'Creates an initializer. Args: dims: Dimension(s) index to compute standard deviation: sqrt(scale / product(shape[dims])) scale: A constant scaling for the initialization used as sqrt(scale / product(shape[dims])). **kwargs: Extra keyword arguments to pass to tf.truncated_normal.'
def __init__(self, dims=(0,), scale=2.0, **kwargs):
    if isinstance(dims, (int, long)):
        self._dims = [dims]
    else:
        self._dims = dims
    self._kwargs = kwargs
    self._scale = scale
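The standard deviation these initializers describe is easy to verify by hand. The helper below (a hypothetical name, not part of the source) computes `sqrt(scale / product(shape[dims]))` from the scaled initializer's docstring; with `scale=1.0` it reduces to the first initializer's `1.0 / sqrt(product(shape[dims]))`.

```python
import math
from functools import reduce

def rsqrt_stddev(shape, dims=(0,), scale=2.0):
    # Fan is the product of the selected dimensions of the variable shape.
    fan = reduce(lambda a, b: a * b, (shape[d] for d in dims), 1)
    return math.sqrt(scale / fan)

# For a [3, 3, 64, 128] conv kernel with dims=(0, 1, 2), fan-in is 3*3*64 = 576.
print(rsqrt_stddev([3, 3, 64, 128], dims=(0, 1, 2)))
# With scale=1.0 this is the plain 1/sqrt(fan) rule of the first initializer.
print(rsqrt_stddev([10], dims=(0,), scale=1.0))
```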
'Always returns True.'
@property
def initialized(self):
    return True
'Initializes Bias block. |initializer| parameter have two special cases. 1. If initializer is None, then this block works as a PassThrough. 2. If initializer is a Bias class object, then tf.constant_initializer is used with the stored value. Args: initializer: An initializer for the bias variable. name: Name of this bl...
def __init__(self, initializer=Bias(0), name=None):
    super(BiasAdd, self).__init__(name)
    with self._BlockScope():
        if isinstance(initializer, Bias):
            self._initializer = tf.constant_initializer(value=initializer.value)
        else:
            self._initializer = initializer
        self._bias = None
'Initializes NN block. Args: depth: The depth of the output. bias: An initializer for the bias, or a Bias class object. If None, there will be no bias term for this NN block. See BiasAdd block. act: Optional activation function. If None, no activation is applied. initializer: The initialization method for the matrix we...
def __init__(self, depth, bias=Bias(0), act=None, initializer=block_util.RsqrtInitializer(), linear_block_factory=(lambda d, i: Linear(d, initializer=i)), name=None):
super(NN, self).__init__(name) with self._BlockScope(): self._linear_block_factory = linear_block_factory self._depth = depth self._initializer = initializer self._matrices = None self._bias = (BiasAdd(bias) if bias else PassThrough()) self._act = (act if act else...
'Initializes a Conv2DBase block. Arguments: depth: The output depth of the block (i.e. #filters); if negative, the output depth will be set to be the same as the input depth. filter_size: The size of the 2D filter. If it\'s specified as an integer, it\'s going to create a square filter. Otherwise, this is a tuple speci...
def __init__(self, depth, filter_size, strides, padding, bias=None, act=None, atrous_rate=None, conv=tf.nn.conv2d, name=None):
super(Conv2DBase, self).__init__(name) with self._BlockScope(): self._act = (act if act else PassThrough()) self._bias = (BiasAdd(bias) if bias else PassThrough()) self._kernel_shape = np.zeros((4,), dtype=np.int32) self._kernel_shape[:2] = filter_size self._kernel_shape[...
'Apply the self._conv op. Arguments: x: input tensor. It needs to be a 4D tensor of the form [batch, height, width, channels]. Returns: The output of the convolution of x with the current convolutional kernel. Raises: ValueError: if number of channels is not defined at graph construction.'
def _Apply(self, x):
input_shape = x.get_shape().with_rank(4) input_shape[3:].assert_is_fully_defined() if (self._kernel is None): assert (self._kernel_shape[2] == 0), self._kernel_shape self._kernel_shape[2] = input_shape[3].value if (self._kernel_shape[3] < 0): self._kernel_shape[3] = self....
'Initializes a Conv2D block. Arguments: depth: The output depth of the block (i.e., #filters) filter_size: The size of the 2D filter. If it\'s specified as an integer, it\'s going to create a square filter. Otherwise, this is a tuple specifying the height x width of the filter. strides: A tuple specifying the y and x s...
def __init__(self, depth, filter_size, strides, padding, bias=None, act=None, initializer=None, name=None):
    super(Conv2D, self).__init__(depth, filter_size, strides, padding, bias, act,
                                 conv=tf.nn.conv2d, name=name)
    with self._BlockScope():
        if initializer is None:
            initializer = block_util.RsqrtInitializer(dims=(0, 1, 2))
        self._initializer = initializer
'Initializes LSTMBase class object. Args: output_shape: List representing the LSTM output shape. This argument does not include batch dimension. For example, if the LSTM output has shape [batch, depth], then pass [depth]. name: Name of this block.'
def __init__(self, output_shape, name):
    super(LSTMBase, self).__init__(name)
    with self._BlockScope():
        self._output_shape = [None] + list(output_shape)
        self._hidden = None
        self._cell = None
'Returns the hidden units of this LSTM.'
@property
def hidden(self):
    return self._hidden
'Assigns to the hidden units of this LSTM. Args: value: The new value for the hidden units. If None, the hidden units are considered to be filled with zeros.'
@hidden.setter
def hidden(self, value):
    if value is not None:
        value.get_shape().assert_is_compatible_with(self._output_shape)
    self._hidden = value
'Returns the cell units of this LSTM.'
@property
def cell(self):
    return self._cell
'Assigns to the cell units of this LSTM. Args: value: The new value for the cell units. If None, the cell units are considered to be filled with zeros.'
@cell.setter
def cell(self, value):
    if value is not None:
        value.get_shape().assert_is_compatible_with(self._output_shape)
    self._cell = value
'Transforms the input units to (4 * depth) units. The forget-gate, input-gate, output-gate, and cell update is computed as f, i, j, o = T(h) + R(x) where h is hidden units, x is input units, and T, R are transforms of h, x, respectively. This method implements R. Note that T is strictly linear, so if LSTM is going to u...
def _TransformInputs(self, _):
raise NotImplementedError()
'Transforms the hidden units to (4 * depth) units. The forget-gate, input-gate, output-gate, and cell update is computed as f, i, j, o = T(h) + R(x) where h is hidden units, x is input units, and T, R are transforms of h, x, respectively. This method implements T in the equation. The method must implement a strictly li...
def _TransformHidden(self, _):
raise NotImplementedError()
'Initialization of the composition operator. Args: block_list: List of blocks.BlockBase that are chained to create a new blocks.BlockBase. name: Name of this block.'
def __init__(self, block_list, name=None):
    super(CompositionOperator, self).__init__(name)
    self._blocks = block_list
'Applies all the blocks successively to the given input tensor.'
def _Apply(self, x):
    h = x
    for layer in self._blocks:
        h = layer(h)
    return h
'Initialization of the parallel exec + concat (Tower). Args: block_list: List of blocks.BlockBase that are chained to create a new blocks.BlockBase. dim: the dimension on which to concat. name: Name of this block.'
def __init__(self, block_list, dim=3, name=None):
    super(TowerOperator, self).__init__(name)
    self._blocks = block_list
    self._concat_dim = dim
'Applies all the blocks to the given input tensor in parallel and concatenates the outputs.'
def _Apply(self, x):
    outputs = [layer(x) for layer in self._blocks]
    return tf.concat(outputs, self._concat_dim)
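The two operators differ only in wiring: composition pipes the output of each block into the next, while the tower applies every block to the same input and concatenates the results. A minimal tensor-free sketch, using plain functions as blocks and list concatenation as a stand-in for `tf.concat`:

```python
def compose(blocks):
    # Sequential chaining, as in CompositionOperator._Apply.
    def apply(x):
        h = x
        for block in blocks:
            h = block(h)
        return h
    return apply

def tower(blocks):
    # Parallel application + concatenation, as in TowerOperator._Apply
    # (lists stand in for tensors; summing lists is the "concat").
    def apply(x):
        outputs = [block(x) for block in blocks]
        return sum(outputs, [])
    return apply

double = lambda xs: [2 * v for v in xs]
inc = lambda xs: [v + 1 for v in xs]

print(compose([double, inc])([1, 2]))  # [3, 5]: doubled, then incremented
print(tower([double, inc])([1, 2]))    # [2, 4, 2, 3]: both outputs side by side
```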
'Computes the loss used for PTN paper (projection + volume loss).'
def get_loss(self, inputs, outputs):
g_loss = tf.zeros(dtype=tf.float32, shape=[]) if self._params.proj_weight: g_loss += losses.add_volume_proj_loss(inputs, outputs, self._params.step_size, self._params.proj_weight) if self._params.volume_weight: g_loss += losses.add_volume_loss(inputs, outputs, 1, self._params.volume_weight) ...
'Aggregate the metrics for voxel generation model. Args: inputs: Input dictionary of the voxel generation model. outputs: Output dictionary returned by the voxel generation model. Returns: names_to_values: metrics->values (dict). names_to_updates: metrics->ops (dict).'
def get_metrics(self, inputs, outputs):
names_to_values = dict() names_to_updates = dict() (tmp_values, tmp_updates) = metrics.add_volume_iou_metrics(inputs, outputs) names_to_values.update(tmp_values) names_to_updates.update(tmp_updates) for (name, value) in names_to_values.iteritems(): slim.summaries.add_scalar_summary(value...
'Function called by TF to save the prediction periodically.'
def write_disk_grid(self, global_step, log_dir, input_images, gt_projs, pred_projs, input_voxels=None, output_voxels=None):
summary_freq = self._params.save_every def write_grid(input_images, gt_projs, pred_projs, global_step, input_voxels, output_voxels): 'Native python function to call for writing images to files.' grid = _build_image_grid(input_images, gt_projs, pred_projs, input_voxels=...
'Get the 4x4 perspective transformation matrix used for PTN.'
def get_transform_matrix(self, view_out):
num_views = self._params.num_views focal_length = self._params.focal_length focal_range = self._params.focal_range phi = 30 theta_interval = (360.0 / num_views) theta = (theta_interval * view_out) camera_matrix = np.zeros((4, 4), dtype=np.float32) intrinsic_matrix = np.eye(4, dtype=np.fl...
'Gets dictionaries from metrics to value `Tensors` & update `Tensors`.'
@abc.abstractmethod
def get_metrics(self, inputs, outputs):
    pass
'Loads data for a specified dataset and split.'
def get_inputs(self, dataset_dir, dataset_name, split_name, batch_size, image_size, vox_size, is_training=True):
del image_size, vox_size with tf.variable_scope(('data_loading_%s/%s' % (dataset_name, split_name))): common_queue_min = 64 common_queue_capacity = 256 num_readers = 4 inputs = input_generator.get(dataset_dir, dataset_name, split_name, shuffle=is_training, num_readers=num_readers...
'Selects the subset of viewpoints to train on.'
def preprocess(self, raw_inputs, step_size):
(quantity, num_views) = raw_inputs['images'].get_shape().as_list()[:2] inputs = dict() inputs['voxels'] = raw_inputs['voxels'] for k in xrange(step_size): inputs[('images_%d' % (k + 1))] = [] inputs[('matrix_%d' % (k + 1))] = [] for n in xrange(quantity): selected_views = np....
'Initialization assignment operator function used while training.'
def get_init_fn(self, scopes):
if (not self._params.init_model): return None is_trainable = (lambda x: (x in tf.trainable_variables())) var_list = [] for scope in scopes: var_list.extend(filter(is_trainable, tf.contrib.framework.get_model_variables(scope))) (init_assign_op, init_feed_dict) = slim.assign_from_check...
'Train operation function for the given scope, used while training.'
def get_train_op_for_scope(self, loss, optimizer, scopes):
is_trainable = (lambda x: (x in tf.trainable_variables())) var_list = [] update_ops = [] for scope in scopes: var_list.extend(filter(is_trainable, tf.contrib.framework.get_model_variables(scope))) update_ops.extend(tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope)) return slim.learni...
'Function called by TF to save the prediction periodically.'
def write_disk_grid(self, global_step, log_dir, input_images, gt_projs, pred_projs, pred_voxels=None):
summary_freq = self._params.save_every def write_grid(input_images, gt_projs, pred_projs, pred_voxels, global_step): 'Native python function to call for writing images to files.' grid = _build_image_grid(input_images, gt_projs, pred_projs, pred_voxels) if ((glo...
'Round-robins the GPU device, reserving the last GPU for an expensive op.'
def _next_device(self):
    if self._num_gpus == 0:
        return ''
    dev = '/gpu:%d' % self._cur_gpu
    if self._num_gpus > 1:
        self._cur_gpu = (self._cur_gpu + 1) % (self._num_gpus - 1)
    return dev
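The modulus `self._num_gpus - 1` is what keeps the last GPU out of the rotation. A self-contained sketch of the same policy (the class name is illustrative): with 3 GPUs, devices cycle between `gpu:0` and `gpu:1`, leaving `gpu:2` free for the expensive op.

```python
class DevicePicker:
    """Sketch of the round-robin policy above: cycle over all GPUs
    except the last, which stays reserved."""

    def __init__(self, num_gpus):
        self._num_gpus = num_gpus
        self._cur_gpu = 0

    def next_device(self):
        if self._num_gpus == 0:
            return ''  # no GPUs: let TF place the op on CPU
        dev = '/gpu:%d' % self._cur_gpu
        if self._num_gpus > 1:
            # Wrap around before ever reaching the last GPU.
            self._cur_gpu = (self._cur_gpu + 1) % (self._num_gpus - 1)
        return dev

picker = DevicePicker(3)
print([picker.next_device() for _ in range(4)])
# ['/gpu:0', '/gpu:1', '/gpu:0', '/gpu:1'] -- gpu:2 is never assigned
```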
'Inputs to be fed to the graph.'
def _add_placeholders(self):
hps = self._hps self._articles = tf.placeholder(tf.int32, [hps.batch_size, hps.enc_timesteps], name='articles') self._abstracts = tf.placeholder(tf.int32, [hps.batch_size, hps.dec_timesteps], name='abstracts') self._targets = tf.placeholder(tf.int32, [hps.batch_size, hps.dec_timesteps], name='targets') ...
'Sets self._train_op, op to run for training.'
def _add_train_op(self):
hps = self._hps self._lr_rate = tf.maximum(hps.min_lr, tf.train.exponential_decay(hps.lr, self.global_step, 30000, 0.98)) tvars = tf.trainable_variables() with tf.device(self._get_gpu((self._num_gpus - 1))): (grads, global_norm) = tf.clip_by_global_norm(tf.gradients(self._loss, tvars), hps.max_g...
'Return the top states from encoder for decoder. Args: sess: tensorflow session. enc_inputs: encoder inputs of shape [batch_size, enc_timesteps]. enc_len: encoder input length of shape [batch_size] Returns: enc_top_states: The top level encoder states. dec_in_state: The decoder layer initial state.'
def encode_top_state(self, sess, enc_inputs, enc_len):
    results = sess.run(
        [self._enc_top_states, self._dec_in_state],
        feed_dict={self._articles: enc_inputs, self._article_lens: enc_len})
    return results[0], results[1][0]
'Return the topK results and new decoder states.'
def decode_topk(self, sess, latest_tokens, enc_top_states, dec_init_states):
feed = {self._enc_top_states: enc_top_states, self._dec_in_state: np.squeeze(np.array(dec_init_states)), self._abstracts: np.transpose(np.array([latest_tokens])), self._abstract_lens: np.ones([len(dec_init_states)], np.int32)} results = sess.run([self._topk_ids, self._topk_log_probs, self._dec_out_state], feed_...
'Hypothesis constructor. Args: tokens: start tokens for decoding. log_prob: log prob of the start tokens, usually 1. state: decoder initial states.'
def __init__(self, tokens, log_prob, state):
    self.tokens = tokens
    self.log_prob = log_prob
    self.state = state
'Extend the hypothesis with result from latest step. Args: token: latest token from decoding. log_prob: log prob of the latest decoded tokens. new_state: decoder output state. Fed to the decoder for next step. Returns: New Hypothesis with the results from latest step.'
def Extend(self, token, log_prob, new_state):
    return Hypothesis(self.tokens + [token], self.log_prob + log_prob, new_state)
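Note that `Extend` builds a fresh `Hypothesis` rather than mutating `self`; that is what lets beam search branch one hypothesis into several candidates per step. A minimal standalone sketch (state handling omitted for brevity):

```python
class Hypothesis:
    # Mirrors the class above; `state` is kept but not exercised here.
    def __init__(self, tokens, log_prob, state=None):
        self.tokens = tokens
        self.log_prob = log_prob
        self.state = state

    def Extend(self, token, log_prob, new_state=None):
        # Returns a *new* Hypothesis; the original stays untouched, so one
        # hypothesis can be extended with several candidate tokens.
        return Hypothesis(self.tokens + [token], self.log_prob + log_prob,
                          new_state)

h = Hypothesis([1], 0.0)
h2 = h.Extend(7, -0.5)
print(h.tokens, h2.tokens, h2.log_prob)  # [1] [1, 7] -0.5
```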
'Creates BeamSearch object. Args: model: Seq2SeqAttentionModel. beam_size: int. start_token: int, id of the token to start decoding with end_token: int, id of the token that completes an hypothesis max_steps: int, upper limit on the size of the hypothesis'
def __init__(self, model, beam_size, start_token, end_token, max_steps):
    self._model = model
    self._beam_size = beam_size
    self._start_token = start_token
    self._end_token = end_token
    self._max_steps = max_steps
'Performs beam search for decoding. Args: sess: tf.Session, session enc_inputs: ndarray of shape (enc_length, 1), the document ids to encode enc_seqlen: ndarray of shape (1), the length of the sequence Returns: hyps: list of Hypothesis, the best hypotheses found by beam search, ordered by score'
def BeamSearch(self, sess, enc_inputs, enc_seqlen):
(enc_top_states, dec_in_state) = self._model.encode_top_state(sess, enc_inputs, enc_seqlen) hyps = ([Hypothesis([self._start_token], 0.0, dec_in_state)] * self._beam_size) results = [] steps = 0 while ((steps < self._max_steps) and (len(results) < self._beam_size)): latest_tokens = [h.latest...
'Sort the hyps based on log probs and length. Args: hyps: A list of hypothesis. Returns: hyps: A list of sorted hypothesis in reverse log_prob order.'
def _BestHyps(self, hyps):
    if FLAGS.normalize_by_length:
        return sorted(hyps, key=lambda h: h.log_prob / len(h.tokens),
                      reverse=True)
    else:
        return sorted(hyps, key=lambda h: h.log_prob, reverse=True)
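Length normalization matters because log probs are negative and accumulate per token, so raw scores systematically favor shorter hypotheses. A FLAGS-free sketch of the same ranking (class and function names are illustrative):

```python
class Hyp:
    def __init__(self, tokens, log_prob):
        self.tokens = tokens
        self.log_prob = log_prob

def best_hyps(hyps, normalize_by_length=True):
    # Mirrors _BestHyps: score by average per-token log prob when normalizing.
    if normalize_by_length:
        key = lambda h: h.log_prob / len(h.tokens)
    else:
        key = lambda h: h.log_prob
    return sorted(hyps, key=key, reverse=True)

short = Hyp(['a', 'b'], -2.0)             # average log prob: -1.0
long_ = Hyp(['a', 'b', 'c', 'd'], -3.0)   # average log prob: -0.75
# Raw log prob favors the short hypothesis; normalization favors the long one.
print(best_hyps([short, long_], normalize_by_length=False)[0].tokens)
print(best_hyps([short, long_], normalize_by_length=True)[0].tokens)
```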
'Batcher constructor. Args: data_path: tf.Example filepattern. vocab: Vocabulary. hps: Seq2SeqAttention model hyperparameters. article_key: article feature key in tf.Example. abstract_key: abstract feature key in tf.Example. max_article_sentences: Max number of sentences used from article. max_abstract_sentences: Max n...
def __init__(self, data_path, vocab, hps, article_key, abstract_key, max_article_sentences, max_abstract_sentences, bucketing=True, truncate_input=False):
self._data_path = data_path self._vocab = vocab self._hps = hps self._article_key = article_key self._abstract_key = abstract_key self._max_article_sentences = max_article_sentences self._max_abstract_sentences = max_abstract_sentences self._bucketing = bucketing self._truncate_input...
'Returns a batch of inputs for seq2seq attention model. Returns: enc_batch: A batch of encoder inputs [batch_size, hps.enc_timestamps]. dec_batch: A batch of decoder inputs [batch_size, hps.dec_timestamps]. target_batch: A batch of targets [batch_size, hps.dec_timestamps]. enc_input_len: encoder input lengths of the ba...
def NextBatch(self):
enc_batch = np.zeros((self._hps.batch_size, self._hps.enc_timesteps), dtype=np.int32) enc_input_lens = np.zeros(self._hps.batch_size, dtype=np.int32) dec_batch = np.zeros((self._hps.batch_size, self._hps.dec_timesteps), dtype=np.int32) dec_output_lens = np.zeros(self._hps.batch_size, dtype=np.int32) ...
'Fill input queue with ModelInput.'
def _FillInputQueue(self):
start_id = self._vocab.WordToId(data.SENTENCE_START) end_id = self._vocab.WordToId(data.SENTENCE_END) pad_id = self._vocab.WordToId(data.PAD_TOKEN) input_gen = self._TextGenerator(data.ExampleGen(self._data_path)) while True: (article, abstract) = six.next(input_gen) article_sentence...
'Fill bucketed batches into the bucket_input_queue.'
def _FillBucketInputQueue(self):
while True: inputs = [] for _ in xrange((self._hps.batch_size * BUCKET_CACHE_BATCH)): inputs.append(self._input_queue.get()) if self._bucketing: inputs = sorted(inputs, key=(lambda inp: inp.enc_len)) batches = [] for i in xrange(0, len(inputs), self._h...
'Watch the daemon input threads and restart if dead.'
def _WatchThreads(self):
while True: time.sleep(60) input_threads = [] for t in self._input_threads: if t.is_alive(): input_threads.append(t) else: tf.logging.error('Found input thread dead.') new_t = Thread(target=self._FillInputQueue)...
'Generates article and abstract text from tf.Example.'
def _TextGenerator(self, example_gen):
while True: e = six.next(example_gen) try: article_text = self._GetExFeatureText(e, self._article_key) abstract_text = self._GetExFeatureText(e, self._abstract_key) except ValueError: tf.logging.error('Failed to get article or abstract fr...
'Extract text for a feature from tf.Example. Args: ex: tf.Example. key: key of the feature to be extracted. Returns: feature: the extracted feature text.'
def _GetExFeatureText(self, ex, key):
return ex.features.feature[key].bytes_list.value[0]
'Writes the reference and decoded outputs to RKV files. Args: reference: The human (correct) result. decode: The machine-generated result'
def Write(self, reference, decode):
    self._ref_file.write('output=%s\n' % reference)
    self._decode_file.write('output=%s\n' % decode)
    self._cnt += 1
    if self._cnt % DECODE_IO_FLUSH_INTERVAL == 0:
        self._ref_file.flush()
        self._decode_file.flush()
'Resets the output files. Must be called once before Write().'
def ResetFiles(self):
    if self._ref_file:
        self._ref_file.close()
    if self._decode_file:
        self._decode_file.close()
    timestamp = int(time.time())
    self._ref_file = open(os.path.join(self._outdir, 'ref%d' % timestamp), 'w')
    self._decode_file = open(os.path.join(self._outdir, 'decode%d' % timestamp), 'w')
'Beam search decoding. Args: model: The seq2seq attentional model. batch_reader: The batch data reader. hps: Hyperparameters. vocab: Vocabulary'
def __init__(self, model, batch_reader, hps, vocab):
    self._model = model
    self._model.build_graph()
    self._batch_reader = batch_reader
    self._hps = hps
    self._vocab = vocab
    self._saver = tf.train.Saver()
    self._decode_io = DecodeIO(FLAGS.decode_dir)
'Decoding loop for long running process.'
def DecodeLoop(self):
    sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
    step = 0
    while step < FLAGS.max_decode_steps:
        time.sleep(DECODE_LOOP_DELAY_SECS)
        if not self._Decode(self._saver, sess):
            continue
        step += 1
'Restore a checkpoint and decode it. Args: saver: Tensorflow checkpoint saver. sess: Tensorflow session. Returns: If success, returns true, otherwise, false.'
def _Decode(self, saver, sess):
ckpt_state = tf.train.get_checkpoint_state(FLAGS.log_root) if (not (ckpt_state and ckpt_state.model_checkpoint_path)): tf.logging.info('No model to decode yet at %s', FLAGS.log_root) return False tf.logging.info('checkpoint path %s', ckpt_state.model_checkpoint_path) ...
'Convert id to words and writing results. Args: article: The original article string. abstract: The human (correct) abstract string. output_ids: The abstract word ids output by machine.'
def _DecodeBatch(self, article, abstract, output_ids):
decoded_output = ' '.join(data.Ids2Words(output_ids, self._vocab)) end_p = decoded_output.find(data.SENTENCE_END, 0) if (end_p != (-1)): decoded_output = decoded_output[:end_p] tf.logging.info('article: %s', article) tf.logging.info('abstract: %s', abstract) tf.logging.info(...
'Initialize vocabulary. Args: filename: Vocabulary file name.'
def __init__(self, filename):
self._id_to_word = [] self._word_to_id = {} self._unk = (-1) self._bos = (-1) self._eos = (-1) with tf.gfile.Open(filename) as f: idx = 0 for line in f: word_name = line.strip() if (word_name == '<S>'): self._bos = idx elif (wor...
'Convert a list of ids to a sentence, with space inserted.'
def decode(self, cur_ids):
return ' '.join([self.id_to_word(cur_id) for cur_id in cur_ids])
'Convert a sentence to a list of ids, with special tokens added.'
def encode(self, sentence):
    word_ids = [self.word_to_id(cur_word) for cur_word in sentence.split()]
    return np.array([self.bos] + word_ids + [self.eos], dtype=np.int32)
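Together, `decode` and `encode` form a round trip through the vocabulary: encoding wraps a sentence's word ids in `bos`/`eos`, and decoding maps ids back to space-joined tokens. A minimal stand-in vocabulary (ids, token strings, and the `TinyVocab` name are illustrative, not the LM1B ones):

```python
class TinyVocab:
    """Minimal sketch of the vocabulary above, with plain lists for ids."""

    def __init__(self, words):
        self._id_to_word = ['<S>', '</S>', '<UNK>'] + list(words)
        self._word_to_id = {w: i for i, w in enumerate(self._id_to_word)}
        self.bos, self.eos, self.unk = 0, 1, 2

    def word_to_id(self, w):
        # Unknown words map to the <UNK> id.
        return self._word_to_id.get(w, self.unk)

    def id_to_word(self, i):
        return self._id_to_word[i]

    def encode(self, sentence):
        ids = [self.word_to_id(w) for w in sentence.split()]
        return [self.bos] + ids + [self.eos]

    def decode(self, ids):
        return ' '.join(self.id_to_word(i) for i in ids)

v = TinyVocab(['hello', 'world'])
print(v.encode('hello world'))             # [0, 3, 4, 1]
print(v.decode(v.encode('hello world')))   # '<S> hello world </S>'
```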
'Initialize LM1BDataset reader. Args: filepattern: Dataset file pattern. vocab: Vocabulary.'
def __init__(self, filepattern, vocab):
    self._vocab = vocab
    self._all_shards = tf.gfile.Glob(filepattern)
    tf.logging.info('Found %d shards at %s', len(self._all_shards), filepattern)
'Randomly select a file and read it.'
def _load_random_shard(self):
return self._load_shard(random.choice(self._all_shards))
'Read one file and convert to ids. Args: shard_name: file path. Returns: list of (id, char_id, global_word_id) tuples.'
def _load_shard(self, shard_name):
tf.logging.info('Loading data from: %s', shard_name) with tf.gfile.Open(shard_name) as f: sentences = f.readlines() chars_ids = [self.vocab.encode_chars(sentence) for sentence in sentences] ids = [self.vocab.encode(sentence) for sentence in sentences] global_word_ids = [] curren...
'Writes a Markdown-formatted version of this document to file `f`. Args: f: The output file.'
def write_markdown_to_file(self, f):
raise NotImplementedError('Document.WriteToFile')
'Creates a new Index. Args: module_to_name: Dictionary mapping modules to short names. members: Dictionary mapping member name to (fullname, member). filename_to_library_map: A list of (filename, Library) pairs. The order corresponds to the order in which the libraries appear in the index. path_prefix: Prefix to add to...
def __init__(self, module_to_name, members, filename_to_library_map, path_prefix):
    self._module_to_name = module_to_name
    self._members = members
    self._filename_to_library_map = filename_to_library_map
    self._path_prefix = path_prefix
'Writes this index to file `f`. The output is formatted as an unordered list. Each list element contains the title of the library, followed by a list of symbols in that library hyperlinked to the corresponding anchor in that library. Args: f: The output file.'
def write_markdown_to_file(self, f):
print('---', file=f) print('---', file=f) print('<!-- This file is machine generated: DO NOT EDIT! -->', file=f) print('', file=f) print('# TensorFlow Python reference documentation', file=f) print('', file=f) fullname_f = (lambda name: self._members[na...
'Creates a new Library. Args: title: A human-readable title for the library. module: Module to pull high level docstring from (for table of contents, list of Ops to document, etc.). module_to_name: Dictionary mapping modules to short names. members: Dictionary mapping member name to (fullname, member). documented: Set ...
def __init__(self, title, module, module_to_name, members, documented, exclude_symbols=(), prefix=None):
    self._title = title
    self._module = module
    self._module_to_name = module_to_name
    self._members = dict(members)
    self._exclude_symbols = frozenset(exclude_symbols)
    documented.update(exclude_symbols)
    self._documented = documented
    self._mentioned = set()
    self._prefix = prefix or ''
'The human-readable title for this library.'
@property
def title(self):
    return self._title
'Set of names mentioned in this library.'
@property
def mentioned(self):
    return self._mentioned
'Set of excluded symbols.'
@property
def exclude_symbols(self):
    return self._exclude_symbols
'Returns True if this member should be included in the document.'
def _should_include_member(self, name, member):
    if _always_drop_symbol_re.match(name):
        return False
    if name in self._exclude_symbols:
        return False
    return True
'Yields the modules imported from `module` as (name, module) pairs.'
def get_imported_modules(self, module):
    for name, member in inspect.getmembers(module):
        if inspect.ismodule(member):
            yield name, member
'Returns the list of class members to document in `cls`. This function filters the class member to ONLY return those defined by the class. It drops the inherited ones. Args: cls_name: Qualified name of `cls`. cls: An inspect object of type \'class\'. Yields: name, member tuples.'
def get_class_members(self, cls_name, cls):
for (name, member) in inspect.getmembers(cls): is_method = (inspect.ismethod(member) or inspect.isfunction(member)) if (not (is_method or isinstance(member, property))): continue if ((is_method and (member.__name__ == '__init__')) or self._should_include_member(name, member)): ...
'Given a function, returns a string representing its args.'
def _generate_signature_for_function(self, func):
args_list = [] argspec = inspect.getargspec(func) first_arg_with_default = (len((argspec.args or [])) - len((argspec.defaults or []))) for arg in argspec.args[:first_arg_with_default]: if (arg == 'self'): continue args_list.append(arg) if ((argspec.varargs == 'args') and ...
'Remove indenting. We follow Python\'s convention and remove the minimum indent of the lines after the first, see: https://www.python.org/dev/peps/pep-0257/#handling-docstring-indentation preserving relative indentation. Args: docstring: A docstring. Returns: A list of strings, one per line, with the minimum indent str...
def _remove_docstring_indent(self, docstring):
docstring = (docstring or '') lines = docstring.strip().split('\n') min_indent = len(docstring) for l in lines[1:]: l = l.rstrip() if l: i = 0 while ((i < len(l)) and (l[i] == ' ')): i += 1 if (i < min_indent): min_in...
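The dedent rule the docstring describes comes from PEP 257: the first line keeps its indentation, and the minimum indent of the remaining non-blank lines is stripped from each, preserving relative indentation. A self-contained sketch of that rule (a standalone function, not the truncated method body above):

```python
def remove_docstring_indent(docstring):
    """PEP 257-style dedent: strip the minimum indent of the lines after
    the first, preserving relative indentation. Returns a list of lines."""
    lines = (docstring or '').strip().split('\n')
    rest = [l.rstrip() for l in lines[1:]]
    # Minimum indent over non-blank continuation lines only.
    indents = [len(l) - len(l.lstrip(' ')) for l in rest if l]
    min_indent = min(indents) if indents else 0
    return [lines[0]] + [l[min_indent:] for l in rest]

doc = """Summary line.
        Args:
          x: a value.
    """
print(remove_docstring_indent(doc))
# ['Summary line.', 'Args:', '  x: a value.'] -- relative indent survives
```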
'Formats the given `docstring` as Markdown and prints it to `f`.'
def _print_formatted_docstring(self, docstring, f):
lines = self._remove_docstring_indent(docstring) i = 0 def _at_start_of_section(): 'Returns the header if lines[i] is at start of a docstring section.' l = lines[i] match = _section_re.match(l) if (match and ((i + 1) < len(lines)) and lines[(i...
'Prints the given function to `f`.'
def _print_function(self, f, prefix, fullname, func):
heading = ((prefix + ' `') + fullname) if (not isinstance(func, property)): heading += self._generate_signature_for_function(func) heading += ('` {#%s}' % _get_anchor(self._module_to_name, fullname)) print(heading, file=f) print('', file=f) self._print_formatted_docstring(inspect.g...
'Print `member` to `f`.'
def _write_member_markdown_to_file(self, f, prefix, name, member):
if (inspect.isfunction(member) or inspect.ismethod(member) or isinstance(member, property)): print('- - -', file=f) print('', file=f) self._print_function(f, prefix, name, member) print('', file=f) elif inspect.isclass(member): print('- - -', file=f) p...
'Write the class doc to `f`. Args: f: File to write to. prefix: Prefix for names. cls: class object. name: name to use.'
def _write_class_markdown_to_file(self, f, name, cls):
methods = dict(self.get_class_members(name, cls)) num_methods = len(methods) try: self._write_docstring_markdown_to_file(f, '####', inspect.getdoc(cls), methods, {}) except ValueError as e: raise ValueError((str(e) + (' in class `%s`' % cls.__name__))) any_method_called_out ...
'Prints this library to file `f`. Args: f: File to write to. Returns: Dictionary of documented members.'
def write_markdown_to_file(self, f):
print('---', file=f) print('---', file=f) print('<!-- This file is machine generated: DO NOT EDIT! -->', file=f) print('', file=f) print('#', self._title, file=f) if self._prefix: print(self._prefix, file=f) print('[TOC]', file=f) print('', file=f) ...
'Writes the leftover members to `f`. Args: f: File to write to. catch_all: If true, document all missing symbols from any module. Otherwise, document missing symbols from just this module.'
def write_other_members(self, f, catch_all=False):
if catch_all: names = self._members.items() else: names = inspect.getmembers(self._module) leftovers = [] for (name, _) in names: if ((name in self._members) and (name not in self._documented)): leftovers.append(name) if leftovers: print(('%s: undocumen...
'Generate an error if there are leftover members.'
def assert_no_leftovers(self):
    leftovers = []
    for name in self._members.keys():
        if name not in self._documented:
            leftovers.append(name)
    if leftovers:
        raise RuntimeError('%s: undocumented members: %s' %
                           (self._title, ', '.join(leftovers)))
'Return a tensor that constructs adversarial examples for the given input. Generate uses tf.py_func in order to operate over tensors. :param x: (required) A tensor with the inputs. :param y: (optional) A tensor with the true labels for an untargeted attack. If None (and y_target is None) then use the original labels th...
def __init__(self, sess, model, batch_size, confidence, targeted, learning_rate, binary_search_steps, max_iterations, abort_early, initial_const, clip_min, clip_max, num_labels, shape):
self.sess = sess self.TARGETED = targeted self.LEARNING_RATE = learning_rate self.MAX_ITERATIONS = max_iterations self.BINARY_SEARCH_STEPS = binary_search_steps self.ABORT_EARLY = abort_early self.CONFIDENCE = confidence self.initial_const = initial_const self.batch_size = batch_size...
'Perform the L_2 attack on the given images for the given targets. If self.targeted is true, then the targets represent the target labels. If self.targeted is false, then the targets are the original class labels.'
def attack(self, imgs, targets):
    r = []
    for i in range(0, len(imgs), self.batch_size):
        r.extend(self.attack_batch(imgs[i:(i + self.batch_size)],
                                   targets[i:(i + self.batch_size)]))
    return np.array(r)
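The batching pattern here is generic: slice both the images and targets into `batch_size`-sized chunks, attack each chunk, and concatenate the results (Python slicing makes the final, possibly shorter, chunk just work). A stand-alone sketch with a fake per-batch attack (names are illustrative):

```python
def attack_in_batches(attack_batch, imgs, targets, batch_size):
    # Mirrors `attack` above: chunk inputs, run per-batch, concatenate results.
    r = []
    for i in range(0, len(imgs), batch_size):
        r.extend(attack_batch(imgs[i:i + batch_size],
                              targets[i:i + batch_size]))
    return r

# A stand-in "attack" that just pairs each image with its target.
fake_batch = lambda imgs, tgts: [(im, t) for im, t in zip(imgs, tgts)]

out = attack_in_batches(fake_batch, list('abcde'), [1, 2, 3, 4, 5], 2)
print(out)  # [('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', 5)]
```

Note the last batch has only one element; slicing past the end of the list simply yields the remainder, so no padding logic is needed.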
'Run the attack on a batch of images and labels.'
def attack_batch(self, imgs, labs):
def compare(x, y): if (not isinstance(x, (float, int, np.int64))): x = np.copy(x) if self.TARGETED: x[y] -= self.CONFIDENCE else: x[y] += self.CONFIDENCE x = np.argmax(x) if self.TARGETED: return (x == y) ...
':param model: An instance of the Model class. :param back: The backend to use. Either \'tf\' (default) or \'th\'. :param sess: The tf session to run graphs in (use None for Theano)'
def __init__(self, model, back='tf', sess=None):
if (not ((back == 'tf') or (back == 'th'))): raise ValueError("Backend argument must either be 'tf' or 'th'.") if ((back == 'th') and (sess is not None)): raise Exception('A session should not be provided when using th.') if (not isinstance(model,...
'Generate the attack\'s symbolic graph for adversarial examples. This method should be overriden in any child class that implements an attack that is expressable symbolically. Otherwise, it will wrap the numerical implementation as a symbolic operator. :param x: The model\'s symbolic inputs. :param **kwargs: optional p...
def generate(self, x, **kwargs):
if (self.back == 'th'):
    raise NotImplementedError('Theano version not implemented.')
error = 'Sub-classes must implement generate.'
raise NotImplementedError(error)
'Generate adversarial examples and return them as a NumPy array. Sub-classes *should not* implement this method unless they must perform special handling of arguments. :param x_val: A NumPy array with the original inputs. :param **kwargs: optional parameters used by child classes. :return: A NumPy array holding the adv...
def generate_np(self, x_val, **kwargs):
if (self.back == 'th'):
    raise NotImplementedError('Theano version not implemented.')
if (self.sess is None):
    raise ValueError('Cannot use `generate_np` when no `sess` was provided')
fixed = dict(((k, v) for (k, v) in kwargs.items() if (k in self.structural_kwarg...
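The truncated dict comprehension above separates structural kwargs (which change the graph and force a rebuild) from feedable ones (which can be fed at run time). A standalone sketch of that split (`split_kwargs` is a hypothetical helper):

```python
def split_kwargs(kwargs, structural_kwargs):
    # Structural arguments define the graph; everything else is feedable.
    fixed = {k: v for k, v in kwargs.items() if k in structural_kwargs}
    feedable = {k: v for k, v in kwargs.items() if k not in structural_kwargs}
    return fixed, feedable
```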
'Take in a dictionary of parameters and apply attack-specific checks before saving them as attributes. :param params: a dictionary of attack-specific parameters :return: True when parsing was successful'
def parse_params(self, params=None):
return True
'Create a FastGradientMethod instance.'
def __init__(self, model, back='tf', sess=None):
super(FastGradientMethod, self).__init__(model, back, sess)
self.feedable_kwargs = {'eps': np.float32, 'y': np.float32, 'y_target': np.float32,
                        'clip_min': np.float32, 'clip_max': np.float32}
self.structural_kwargs = ['ord']
if (not isinstance(self.model, Model)):
    self.model = CallableModelWrapp...
'Generate symbolic graph for adversarial examples and return. :param x: The model\'s symbolic inputs. :param eps: (optional float) attack step size (input variation) :param ord: (optional) Order of the norm (mimics NumPy). Possible values: np.inf, 1 or 2. :param y: (optional) A tensor with the model labels. Only provid...
def generate(self, x, **kwargs):
assert self.parse_params(**kwargs)
if (self.back == 'tf'):
    from .attacks_tf import fgm
else:
    from .attacks_th import fgm
if (self.y is not None):
    y = self.y
else:
    y = self.y_target
return fgm(x, self.model.get_probs(x), y=y, eps=self.eps, ord=self.ord, clip_min=se...
'Take in a dictionary of parameters and apply attack-specific checks before saving them as attributes. Attack-specific parameters: :param eps: (optional float) attack step size (input variation) :param ord: (optional) Order of the norm (mimics NumPy). Possible values: np.inf, 1 or 2. :param y: (optional) A tensor wit...
def parse_params(self, eps=0.3, ord=np.inf, y=None, y_target=None, clip_min=None, clip_max=None, **kwargs):
self.eps = eps
self.ord = ord
self.y = y
self.y_target = y_target
self.clip_min = clip_min
self.clip_max = clip_max
if ((self.y is not None) and (self.y_target is not None)):
    raise ValueError('Must not set both y and y_target')
if (self.ord not in [np.inf, int(1...
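For the `ord=np.inf` case these parameters drive a single gradient-sign step. A NumPy sketch of that update, with `grad` standing in for the gradient of the loss with respect to the input (`fgm_np` is illustrative, not the library's `fgm`):

```python
import numpy as np

def fgm_np(x, grad, eps=0.3, clip_min=None, clip_max=None):
    # Fast gradient (sign) method, L-inf case: one step of size eps
    # in the direction that increases the loss.
    adv_x = x + eps * np.sign(grad)
    if clip_min is not None and clip_max is not None:
        adv_x = np.clip(adv_x, clip_min, clip_max)  # stay in the valid input range
    return adv_x
```

The `ord=1` and `ord=2` variants replace `np.sign(grad)` with the gradient normalized by its L1 or L2 norm.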
'Create a BasicIterativeMethod instance.'
def __init__(self, model, back='tf', sess=None):
super(BasicIterativeMethod, self).__init__(model, back, sess)
self.feedable_kwargs = {'eps': np.float32, 'eps_iter': np.float32, 'y': np.float32,
                        'y_target': np.float32, 'clip_min': np.float32, 'clip_max': np.float32}
self.structural_kwargs = ['ord', 'nb_iter']
if (not isinstance(self.model, Model)):
    ...
'Generate symbolic graph for adversarial examples and return. :param x: The model\'s symbolic inputs. :param eps: (required float) maximum distortion of adversarial example compared to original input :param eps_iter: (required float) step size for each attack iteration :param nb_iter: (required int) Number of attack it...
def generate(self, x, **kwargs):
import tensorflow as tf
assert self.parse_params(**kwargs)
eta = 0
model_preds = self.model.get_probs(x)
preds_max = tf.reduce_max(model_preds, 1, keep_dims=True)
if (self.y_target is not None):
    y = self.y_target
    targeted = True
elif (self.y is not None):
    y = self.y
    ...
'Take in a dictionary of parameters and apply attack-specific checks before saving them as attributes. Attack-specific parameters: :param eps: (required float) maximum distortion of adversarial example compared to original input :param eps_iter: (required float) step size for each attack iteration :param nb_iter: (re...
def parse_params(self, eps=0.3, eps_iter=0.05, nb_iter=10, y=None, ord=np.inf, clip_min=None, clip_max=None, y_target=None, **kwargs):
self.eps = eps
self.eps_iter = eps_iter
self.nb_iter = nb_iter
self.y = y
self.y_target = y_target
self.ord = ord
self.clip_min = clip_min
self.clip_max = clip_max
if ((self.y is not None) and (self.y_target is not None)):
    raise ValueError('Must not set both y ...
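These parameters combine into repeated small FGSM steps, with the accumulated perturbation `eta` projected back into the `eps`-ball after every step (the `ord=np.inf` case). A NumPy sketch, where `grad_fn(x_adv)` stands in for the loss gradient at the current adversarial point (`bim_np` is illustrative, not the library's implementation):

```python
import numpy as np

def bim_np(x, grad_fn, eps=0.3, eps_iter=0.05, nb_iter=10,
           clip_min=None, clip_max=None):
    eta = np.zeros_like(x)
    for _ in range(nb_iter):
        # One eps_iter-sized gradient-sign step...
        eta = eta + eps_iter * np.sign(grad_fn(x + eta))
        # ...then project the total perturbation back into the eps-ball.
        eta = np.clip(eta, -eps, eps)
    adv_x = x + eta
    if clip_min is not None and clip_max is not None:
        adv_x = np.clip(adv_x, clip_min, clip_max)
    return adv_x
```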
'Create a SaliencyMapMethod instance.'
def __init__(self, model, back='tf', sess=None):
super(SaliencyMapMethod, self).__init__(model, back, sess)
if (not isinstance(self.model, Model)):
    self.model = CallableModelWrapper(self.model, 'probs')
if (self.back == 'th'):
    error = 'Theano version of SaliencyMapMethod not implemented.'
    raise NotImplementedError(er...
'Generate symbolic graph for adversarial examples and return. :param x: The model\'s symbolic inputs. :param theta: (optional float) Perturbation introduced to modified components (can be positive or negative) :param gamma: (optional float) Maximum percentage of perturbed features :param nb_classes: (optional int) Numb...
def generate(self, x, **kwargs):
import tensorflow as tf
from .attacks_tf import jacobian_graph, jsma_batch
assert self.parse_params(**kwargs)
preds = self.model.get_probs(x)
grads = jacobian_graph(preds, x, self.nb_classes)
if (self.y_target is not None):
    def jsma_wrap(x_val, y_target):
        return jsma_batch(se...
'Take in a dictionary of parameters and apply attack-specific checks before saving them as attributes. Attack-specific parameters: :param theta: (optional float) Perturbation introduced to modified components (can be positive or negative) :param gamma: (optional float) Maximum percentage of perturbed features :param ...
def parse_params(self, theta=1.0, gamma=np.inf, nb_classes=10, clip_min=0.0, clip_max=1.0, y_target=None, **kwargs):
self.theta = theta
self.gamma = gamma
self.nb_classes = nb_classes
self.clip_min = clip_min
self.clip_max = clip_max
self.y_target = y_target
return True
'Generate symbolic graph for adversarial examples and return. :param x: The model\'s symbolic inputs. :param eps: (optional float) the epsilon (input variation parameter) :param num_iterations: (optional) the number of iterations :param xi: (optional float) the finite difference parameter :param clip_min: (optional fl...
def generate(self, x, **kwargs):
assert self.parse_params(**kwargs)
return vatm(self.model, x, self.model.get_logits(x), eps=self.eps,
            num_iterations=self.num_iterations, xi=self.xi,
            clip_min=self.clip_min, clip_max=self.clip_max)
'Take in a dictionary of parameters and apply attack-specific checks before saving them as attributes. Attack-specific parameters: :param eps: (optional float) the epsilon (input variation parameter) :param num_iterations: (optional) the number of iterations :param xi: (optional float) the finite difference parameter...
def parse_params(self, eps=2.0, num_iterations=1, xi=1e-06, clip_min=None, clip_max=None, **kwargs):
self.eps = eps
self.num_iterations = num_iterations
self.xi = xi
self.clip_min = clip_min
self.clip_max = clip_max
return True
'Return a tensor that constructs adversarial examples for the given input. Generate uses tf.py_func in order to operate over tensors. :param x: (required) A tensor with the inputs. :param y: (optional) A tensor with the true labels for an untargeted attack. If None (and y_target is None) then use the original labels th...
def generate(self, x, **kwargs):
import tensorflow as tf
from .attacks_tf import CarliniWagnerL2 as CWL2
self.parse_params(**kwargs)
attack = CWL2(self.sess, self.model, self.batch_size, self.confidence,
              ('y_target' in kwargs), self.learning_rate,
              self.binary_search_steps, self.max_iterations,
              self.abort_early, self.initial_const, self...
'Parameters data : str String with lines separated by \'\n\'.'
def __init__(self, data):
if isinstance(data, list):
    self._str = data
else:
    self._str = data.split('\n')
self.reset()
'func_name : Descriptive text continued text another_func_name : Descriptive text func_name1, func_name2, func_name3'
def _parse_see_also(self, content):
functions = []
current_func = None
rest = []
for line in content:
    if (not line.strip()):
        continue
    if (':' in line):
        if current_func:
            functions.append((current_func, rest))
        r = line.split(':', 1)
        current_func = r[0].strip()
        ...
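The truncated loop above can be sketched end-to-end as a simplified standalone parser: lines of the form "name : description" start a new entry, and other non-blank lines continue the previous entry's description. The real numpydoc routine also handles comma-separated name lists (func_name1, func_name2, ...), which this sketch ignores:

```python
def parse_see_also(content):
    """Return [(func_name, [description lines]), ...] from See-Also lines."""
    functions = []
    current_func, rest = None, []
    for line in content:
        if not line.strip():
            continue  # skip blank lines
        if ':' in line:
            if current_func:
                functions.append((current_func, rest))
            name, desc = line.split(':', 1)
            current_func = name.strip()
            rest = [desc.strip()] if desc.strip() else []
        elif current_func:
            rest.append(line.strip())  # continuation of the description
    if current_func:
        functions.append((current_func, rest))
    return functions
```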
'.. index: default :refguide: something, else, and more'
def _parse_index(self, section, content):
def strip_each_in(lst):
    return [s.strip() for s in lst]
out = {}
section = section.split('::')
if (len(section) > 1):
    out['default'] = strip_each_in(section[1].split(','))[0]
for line in content:
    line = line.split(':')
    if (len(line) > 2):
        out[line[1]] = st...
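Completing the truncated body as a standalone sketch of the index parse: the `.. index:: default` line yields the default entry, and `:refguide: a, b` lines become comma-split lists. The truncated tail is an assumption modeled on the visible `strip_each_in` helper:

```python
def parse_index(section, content):
    """Parse '.. index:: default' plus ':key: a, b' option lines into a dict."""
    def strip_each_in(lst):
        return [s.strip() for s in lst]
    out = {}
    parts = section.split('::')
    if len(parts) > 1:
        # Text after '::' on the directive line is the default entry.
        out['default'] = strip_each_in(parts[1].split(','))[0]
    for line in content:
        fields = line.split(':')
        if len(fields) > 2:
            # ':refguide: something, else' -> {'refguide': ['something', 'else']}
            out[fields[1]] = strip_each_in(fields[2].split(','))
    return out
```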