hash stringlengths 40 40 | diff stringlengths 131 26.7k | message stringlengths 7 694 | project stringlengths 5 67 | split stringclasses 1 value | diff_languages stringlengths 2 24 |
|---|---|---|---|---|---|
93c120dfa3d78a49e8a20b38339eb7d4781add53 | diff --git a/lib/ox/element.rb b/lib/ox/element.rb
index <HASH>..<HASH> 100644
--- a/lib/ox/element.rb
+++ b/lib/ox/element.rb
@@ -68,7 +68,6 @@ module Ox
# - +other+ [Object] Object compare _self_ to.
# *return* [Boolean] true if both Objects are equivalent, otherwise false.
def eql?(other)
- return false if (other.nil? or self.class != other.class)
return false unless super(other)
return false unless self.attributes == other.attributes
return false unless self.nodes == other.nodes
diff --git a/lib/ox/instruct.rb b/lib/ox/instruct.rb
index <HASH>..<HASH> 100644
--- a/lib/ox/instruct.rb
+++ b/lib/ox/instruct.rb
@@ -25,7 +25,6 @@ module Ox
# - +other+ [Object] Object compare _self_ to.
# *return* [Boolean] true if both Objects are equivalent, otherwise false.
def eql?(other)
- return false if (other.nil? or self.class != other.class)
return false unless super(other)
return false unless self.attributes == other.attributes
return false unless self.content == other.content | Removed unnecessary lines
- Element and Instruct both inherit from Node. They then override
Node#eql? to include additional behavior, but still call super for the
basic check, which makes the lines I removed redundant. | ohler55_ox | train | rb,rb |
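The pattern behind this first diff — a subclass equality method that delegates the nil/class check to `super` instead of repeating it — can be sketched in Python (used here purely as an illustration; the original code is Ruby):

```python
class Node:
    def __eq__(self, other):
        # The base class alone rejects None and type mismatches, so
        # subclasses that call super() need not repeat either check.
        return type(self) is type(other)

class Element(Node):
    def __init__(self, attributes, nodes):
        self.attributes = attributes
        self.nodes = nodes

    def __eq__(self, other):
        if not super().__eq__(other):  # covers None and class mismatch
            return False
        return (self.attributes == other.attributes
                and self.nodes == other.nodes)
```

With the base check centralized, an explicit `other.nil? or self.class != other.class` guard in each subclass is dead code — which is exactly what the commit removes.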
015b6c2cb2639872db1ccfe10b9ae15aad5e4b5b | diff --git a/blaze-common-utils/src/main/java/com/blazebit/reflection/PropertyPathExpression.java b/blaze-common-utils/src/main/java/com/blazebit/reflection/PropertyPathExpression.java
index <HASH>..<HASH> 100644
--- a/blaze-common-utils/src/main/java/com/blazebit/reflection/PropertyPathExpression.java
+++ b/blaze-common-utils/src/main/java/com/blazebit/reflection/PropertyPathExpression.java
@@ -164,7 +164,7 @@ public class PropertyPathExpression<X, Y> implements ValueAccessor<X, Y> {
return null;
}
- if(!source.isInstance(target)){
+ if(target != null && !source.isInstance(target)){
throw new IllegalArgumentException("Given target is not instance of the source class");
} | make static calls for first operation possible in property path expressions | Blazebit_blaze-utils | train | java |
b926f23c6b2db437e2c7ef15311a2fbfa803ccb4 | diff --git a/jbpm-bpmn2-emfextmodel/src/test/java/org/jbpm/bpmn2/emfextmodel/BPMN2EmfExtTest.java b/jbpm-bpmn2-emfextmodel/src/test/java/org/jbpm/bpmn2/emfextmodel/BPMN2EmfExtTest.java
index <HASH>..<HASH> 100644
--- a/jbpm-bpmn2-emfextmodel/src/test/java/org/jbpm/bpmn2/emfextmodel/BPMN2EmfExtTest.java
+++ b/jbpm-bpmn2-emfextmodel/src/test/java/org/jbpm/bpmn2/emfextmodel/BPMN2EmfExtTest.java
@@ -182,7 +182,7 @@ public class BPMN2EmfExtTest extends TestCase {
outResource.load(is, options);
DocumentRoot outRoot = (DocumentRoot) outResource.getContents().get(0);
- assertNotNull(outRoot.getImport());
+ assertNotNull(outRoot.getGlobal());
GlobalType globalType = outRoot.getGlobal();
assertEquals("identifier", globalType.getIdentifier());
assertEquals("type", globalType.getType()); | fix for emfext test | kiegroup_jbpm | train | java |
731e295b8a4167f86e7870b2f0d97a189fc07ec1 | diff --git a/plugins/jobs/web_client/js/views/JobDetailsWidget.js b/plugins/jobs/web_client/js/views/JobDetailsWidget.js
index <HASH>..<HASH> 100644
--- a/plugins/jobs/web_client/js/views/JobDetailsWidget.js
+++ b/plugins/jobs/web_client/js/views/JobDetailsWidget.js
@@ -1,6 +1,10 @@
girder.views.jobs_JobDetailsWidget = girder.View.extend({
initialize: function (settings) {
this.job = settings.job;
+
+ if (settings.renderImmediate) {
+ this.render();
+ }
},
render: function () {
@@ -13,3 +17,14 @@ girder.views.jobs_JobDetailsWidget = girder.View.extend({
return this;
}
});
+
+girder.router.route('job/:id', 'jobView', function (id) {
+ var job = new girder.models.JobModel({_id: id}).once('g:fetched', function () {
+ girder.events.trigger('g:navigateTo', girder.views.jobs_JobDetailsWidget, {
+ job: job,
+ renderImmediate: true
+ });
+ }, this).once('g:error', function () {
+ girder.router.navigate('collections', {trigger: true});
+ }, this).fetch();
+}); | Add top level view in web client for job details | girder_girder | train | js |
67db78d405c5b4732cf486cf468a78492d4b13cb | diff --git a/launcher.js b/launcher.js
index <HASH>..<HASH> 100644
--- a/launcher.js
+++ b/launcher.js
@@ -269,8 +269,8 @@ else if ('string' === typeof opts.ssl) {
if (!options.ssl) return;
}
-if (program['default']) {
- options.defaultChannel = program['default'];
+if (opts['default']) {
+ options.defaultChannel = opts['default'];
}
if (opts.port) { | Fixed 'default' option in launcher for Commander 7 | nodeGame_nodegame | train | js |
1177c2e4980b3840f25d1bd5d4229d990e9ce175 | diff --git a/app/assets/javascripts/pageflow/editor/views/edit_page_view.js b/app/assets/javascripts/pageflow/editor/views/edit_page_view.js
index <HASH>..<HASH> 100644
--- a/app/assets/javascripts/pageflow/editor/views/edit_page_view.js
+++ b/app/assets/javascripts/pageflow/editor/views/edit_page_view.js
@@ -32,7 +32,8 @@ pageflow.EditPageView = Backbone.Marionette.Layout.extend({
pictogramClass: 'type_pictogram',
helpLinkClicked: function(value) {
- pageflow.app.trigger('toggle-help', pageflow.Page.typesByName[value].help_entry_translation_key);
+ var pageType = pageflow.editor.pageTypes.findByName(value);
+ pageflow.app.trigger('toggle-help', pageType.seed.help_entry_translation_key);
}
})); | Use new Page Type API to Display Page Type Help | codevise_pageflow | train | js |
2887492582c7e84b852024656b72bcd3e2129283 | diff --git a/anyvcs/hg.py b/anyvcs/hg.py
index <HASH>..<HASH> 100644
--- a/anyvcs/hg.py
+++ b/anyvcs/hg.py
@@ -230,7 +230,10 @@ class HgRepo(VCSRepo):
except KeyError:
pass
if lookup:
- p = type(self).cleanPath(path + '/' + name)
+ if name:
+ p = type(self).cleanPath(path + '/' + name)
+ else:
+ p = path
lookup_commit[p] = (entry, objid)
results.append(entry) | fix cache miss in HgRepo.ls(report=['commit']) for non-directories
Close issue #<I> | ScottDuckworth_python-anyvcs | train | py |
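The cache-miss fix in this hunk hinges on path joining: when `name` is empty, `path + '/' + name` yields a trailing slash, so the lookup key no longer matches the key the entry was stored under. A minimal Python sketch of the guard (the clean-path behavior is an assumption for illustration, not taken from the Mercurial code):

```python
def clean_path(p):
    # Hypothetical cleanPath: collapse duplicate slashes, drop a trailing one.
    while '//' in p:
        p = p.replace('//', '/')
    return p.rstrip('/') or '/'

def lookup_key(path, name):
    # Joining with an empty name would produce "path/" and miss cache
    # entries stored under "path", so fall back to the path itself.
    if name:
        return clean_path(path + '/' + name)
    return path
```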
a2e84b44cfbfc354871a02dbaad11df4697ce109 | diff --git a/lib/deep_cover/coverage/persistence.rb b/lib/deep_cover/coverage/persistence.rb
index <HASH>..<HASH> 100644
--- a/lib/deep_cover/coverage/persistence.rb
+++ b/lib/deep_cover/coverage/persistence.rb
@@ -1,7 +1,5 @@
# frozen_string_literal: true
-require 'deep_cover/version'
-
module DeepCover
require 'securerandom'
class Coverage::Persistence
diff --git a/lib/deep_cover/load.rb b/lib/deep_cover/load.rb
index <HASH>..<HASH> 100644
--- a/lib/deep_cover/load.rb
+++ b/lib/deep_cover/load.rb
@@ -13,6 +13,7 @@ module DeepCover
AUTOLOAD.each do |module_name|
DeepCover.autoload(Tools::Camelize.camelize(module_name), "#{__dir__}/#{module_name}")
end
+ DeepCover.autoload :VERSION, 'deep_cover/version'
Object.autoload :Term, 'term/ansicolor'
Object.autoload :Terminal, 'terminal-table'
require 'pry'
@@ -42,6 +43,7 @@ module DeepCover
AUTOLOAD.each do |module_name|
DeepCover.const_get(Tools::Camelize.camelize(module_name))
end
+ DeepCover::VERSION # rubocop:disable Lint/Void
@all_loaded = true
end
end | Load VERSION at the same place we load the rest of DeepCover | deep-cover_deep-cover | train | rb,rb |
35f91b56676cc7156d5902cca84538e44160afe0 | diff --git a/setup.py b/setup.py
index <HASH>..<HASH> 100644
--- a/setup.py
+++ b/setup.py
@@ -30,8 +30,13 @@ libsass_headers = [
]
if sys.platform == 'win32':
+ from distutils.msvc9compiler import get_build_version
+ vccmntools_env = 'VS{0}{1}COMNTOOLS'.format(
+ int(get_build_version()),
+ int(get_build_version() * 10) % 10
+ )
try:
- os.environ['VS90COMNTOOLS'] = os.environ['VS110COMNTOOLS']
+ os.environ[vccmntools_env] = os.environ['VS110COMNTOOLS']
except KeyError:
warnings.warn('You probably need Visual Studio 2012 (11.0) or higher')
# Workaround http://bugs.python.org/issue4431 under Python <= 2.6 | Fix finding vcvarsall.bat on Python <I>
(#7) | sass_libsass-python | train | py |
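The underlying trick in this setup.py fix is deriving the `VSxyCOMNTOOLS` environment-variable name from MSVC's floating-point build version. A Python sketch mirroring the diff's arithmetic (note that versions with inexact decimal fractions would need care with floating-point rounding):

```python
def comntools_env_var(build_version):
    # 9.0 -> VS90COMNTOOLS, 11.0 -> VS110COMNTOOLS: the major digits come
    # from the integer part, the minor digit from the first decimal place.
    return 'VS{0}{1}COMNTOOLS'.format(
        int(build_version),
        int(build_version * 10) % 10,
    )
```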
cc4901e072d593c71c71e5e8539ed2fe57a948f5 | diff --git a/tests/test_names.py b/tests/test_names.py
index <HASH>..<HASH> 100644
--- a/tests/test_names.py
+++ b/tests/test_names.py
@@ -4,6 +4,7 @@ import scrubadub
from base import BaseTestCase
+import scrubadub
class NameTestCase(unittest.TestCase, BaseTestCase):
@@ -28,3 +29,9 @@ class NameTestCase(unittest.TestCase, BaseTestCase):
AFTER: {{NAME}} is a friendly person
"""
self.compare_before_after()
+
+ def test_disallowed_nouns(self):
+ detector = scrubadub.detectors.NameDetector()
+ detector.disallowed_nouns = set()
+ with self.assertRaises(TypeError):
+ list(detector.iter_filth('John is a cat')) | test for invalid disallowed_nouns in name detector | datascopeanalytics_scrubadub | train | py |
54201daea9a0dd524b28a4bd064b3d719cdb0cb0 | diff --git a/src/brackets.js b/src/brackets.js
index <HASH>..<HASH> 100644
--- a/src/brackets.js
+++ b/src/brackets.js
@@ -33,6 +33,7 @@ define(function(require, exports, module) {
, FileCommandHandlers : FileCommandHandlers
, DocumentManager : DocumentManager
, Commands : Commands
+ , WorkingSetView : WorkingSetView
, CommandManager : require("CommandManager")
refactoring working set tests | adobe_brackets | train | js |
c8e5e25d8e10c6235bd18fec2d3d44b56383c58a | diff --git a/manticore/platforms/evm.py b/manticore/platforms/evm.py
index <HASH>..<HASH> 100644
--- a/manticore/platforms/evm.py
+++ b/manticore/platforms/evm.py
@@ -69,7 +69,7 @@ def globalfakesha3(data):
consts.add(
"oog",
- default="complete",
+ default="ignore",
description=(
"Default behavior for symbolic gas."
"pedantic: Fully faithful. Test at every instruction. Forks." | Ignore Gas Calculations by Default (#<I>)
As I understand it, this is necessary to get the [building secure contracts](<URL>) repo passing by default. It's also a frequent first step for real-world EVM analysis, so perhaps it will save some time.
The tests are performing suspiciously well locally. I would have expected some changes to be required. Let's see how GH Actions handles them. | trailofbits_manticore | train | py |
15350cf5332f126f3d53cddcc61f0fe7d9541254 | diff --git a/src/system/modules/metamodels/config/config.php b/src/system/modules/metamodels/config/config.php
index <HASH>..<HASH> 100644
--- a/src/system/modules/metamodels/config/config.php
+++ b/src/system/modules/metamodels/config/config.php
@@ -44,6 +44,14 @@ if (version_compare(VERSION, '3.0', '<')/* && !class_exists('Composer\Autoload\C
if (file_exists($baseDir . DIRECTORY_SEPARATOR . $fileName))
{
require $baseDir . DIRECTORY_SEPARATOR . $fileName;
+
+ // Tell the Contao 2.X auto loader cache that the class is available.
+ if (class_exists('Cache'))
+ {
+ \Cache::getInstance()->{'classFileExists-' . $className} = true;
+ }
+ \FileCache::getInstance('classes')->$className = true;
+
return true;
}
} | Add classes to FileCache in Contao 2.X. | MetaModels_core | train | php |
ffca6dfe0162074cb9224a26aa667ac724be0ab8 | diff --git a/core/types/transaction_signing.go b/core/types/transaction_signing.go
index <HASH>..<HASH> 100644
--- a/core/types/transaction_signing.go
+++ b/core/types/transaction_signing.go
@@ -227,13 +227,13 @@ func recoverPlain(sighash common.Hash, R, S, Vb *big.Int, homestead bool) (commo
if !crypto.ValidateSignatureValues(V, R, S, homestead) {
return common.Address{}, ErrInvalidSig
}
- // encode the snature in uncompressed format
+ // encode the signature in uncompressed format
r, s := R.Bytes(), S.Bytes()
sig := make([]byte, 65)
copy(sig[32-len(r):32], r)
copy(sig[64-len(s):64], s)
sig[64] = V
- // recover the public key from the snature
+ // recover the public key from the signature
pub, err := crypto.Ecrecover(sighash[:], sig)
if err != nil {
return common.Address{}, err | core/types: fix typos (#<I>) | ethereum_go-ethereum | train | go |
9bf300d9c7d834d22a8683b80f0caad6f2982f0d | diff --git a/tests/cases/setup_database_test.py b/tests/cases/setup_database_test.py
index <HASH>..<HASH> 100644
--- a/tests/cases/setup_database_test.py
+++ b/tests/cases/setup_database_test.py
@@ -168,9 +168,8 @@ class SetupDatabaseTestCase(base.TestCase):
'creatorId': admin['_id']
}, item, 'emptyfile')
- # empty files only create items, not files
file = File().findOne({'itemId': item['_id']})
- self.assertEqual(file, None)
+ self.assertEqual(file['name'], 'emptyfile.txt')
item = Item().findOne(
{'name': 'file.txt', 'folderId': folder['_id']}) | Fix test that depended on buggy behavior | girder_girder | train | py |
8727988ee7738206ec8a9a1a04ba18b99eaf7f66 | diff --git a/astropy_helpers/sphinx/conf.py b/astropy_helpers/sphinx/conf.py
index <HASH>..<HASH> 100644
--- a/astropy_helpers/sphinx/conf.py
+++ b/astropy_helpers/sphinx/conf.py
@@ -57,15 +57,6 @@ intersphinx_mapping = {
'astropy': ('http://docs.astropy.org/en/stable/', None),
'h5py': ('http://docs.h5py.org/en/latest/', None)}
-if sys.version_info[0] == 2:
- intersphinx_mapping['python'] = (
- 'https://docs.python.org/2/',
- (None, 'http://data.astropy.org/intersphinx/python2.inv'))
- intersphinx_mapping['pythonloc'] = (
- 'http://docs.python.org/',
- path.abspath(path.join(path.dirname(__file__),
- 'local/python2_local_links.inv')))
-
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build'] | Removing py2 intersphinx from conf, too | astropy_astropy-helpers | train | py |
c8a1a0a6d716e0448e8e4f838bc89730726cea90 | diff --git a/src/AbstractSchema.php b/src/AbstractSchema.php
index <HASH>..<HASH> 100644
--- a/src/AbstractSchema.php
+++ b/src/AbstractSchema.php
@@ -74,31 +74,6 @@ abstract class AbstractSchema implements SchemaInterface
/**
*
- * Returns a list of tables in the database.
- *
- * @param string $schema Optionally, pass a schema name to get the list
- * of tables in this schema.
- *
- * @return string[] The list of table-names in the database.
- *
- */
- abstract public function fetchTableList($schema = null);
-
- /**
- *
- * Returns an array of columns in a table.
- *
- * @param string $spec Return the columns in this table. This may be just
- * a `table` name, or a `schema.table` name.
- *
- * @return Column[] An associative array where the key is the column name
- * and the value is a Column object.
- *
- */
- abstract public function fetchTableCols($spec);
-
- /**
- *
* Returns the column factory object.
*
* @return ColumnFactory | remove abstract methods to soothe <I>, at least on windows; fixes #<I> | auraphp_Aura.SqlSchema | train | php |
8e55adf4ecf35d1edb105b2817a42d144645d46c | diff --git a/server/render.js b/server/render.js
index <HASH>..<HASH> 100644
--- a/server/render.js
+++ b/server/render.js
@@ -78,17 +78,20 @@ async function doRender (req, res, pathname, query, {
let html
let head
let errorHtml = ''
+
try {
- html = render(app)
+ if (err && dev) {
+ errorHtml = render(createElement(ErrorDebug, { error: err }))
+ } else if (err) {
+ errorHtml = render(app)
+ } else {
+ html = render(app)
+ }
} finally {
head = Head.rewind() || defaultHead()
}
const chunks = loadChunks({ dev, dir, dist, availableChunks })
- if (err && dev) {
- errorHtml = render(createElement(ErrorDebug, { error: err }))
- }
-
return { html, head, errorHtml, chunks }
} | Render error as errorHtml (#<I>) | zeit_next.js | train | js |
7a1b3a39f95af76230d27c8a3ef1b6db67fcf4b9 | diff --git a/src/main/java/com/blackducksoftware/integration/hub/builder/HubServerConfigBuilder.java b/src/main/java/com/blackducksoftware/integration/hub/builder/HubServerConfigBuilder.java
index <HASH>..<HASH> 100644
--- a/src/main/java/com/blackducksoftware/integration/hub/builder/HubServerConfigBuilder.java
+++ b/src/main/java/com/blackducksoftware/integration/hub/builder/HubServerConfigBuilder.java
@@ -236,6 +236,10 @@ public class HubServerConfigBuilder extends AbstractBuilder<GlobalFieldKey, HubS
this.proxyPort = proxyPort;
}
+ public void setProxyPort(final String proxyPort) {
+ setProxyPort(stringToInteger(proxyPort));
+ }
+
public String getProxyUsername() {
return proxyUsername;
} | Adding in another set method for the proxy port | blackducksoftware_blackduck-common | train | java |
ce84c271b6a1fca093f8dc6b0134425ac45a7e5a | diff --git a/lib/uaa/token_coder.rb b/lib/uaa/token_coder.rb
index <HASH>..<HASH> 100644
--- a/lib/uaa/token_coder.rb
+++ b/lib/uaa/token_coder.rb
@@ -64,6 +64,7 @@ class TokenCoder
end
def self.decode(token, skey = nil, pkey = nil, verify = true)
+ pkey = OpenSSL::PKey::RSA.new(pkey) unless pkey.nil? || pkey.is_a?(OpenSSL::PKey::PKey)
segments = token.split('.')
raise DecodeError, "Not enough or too many segments" unless [2,3].include? segments.length
header_segment, payload_segment, crypto_segment = segments | fix raw decode api to also accept PEM format pub key.
Change-Id: I<I>fd6d3a<I>f<I>d9e3a<I>cc<I>e7adce | cloudfoundry_cf-uaa-lib | train | rb |
d2b7f9018fe3ea1ab3579f1cca7c742669ec1cd7 | diff --git a/src/app/Classes/TreeGenerator.php b/src/app/Classes/TreeGenerator.php
index <HASH>..<HASH> 100644
--- a/src/app/Classes/TreeGenerator.php
+++ b/src/app/Classes/TreeGenerator.php
@@ -17,12 +17,12 @@ class TreeGenerator
public function get()
{
- $this->generate();
+ $this->run();
return $this->tree;
}
- private function generate()
+ private function run()
{
$this->branches()
->each(function ($branch) {
@@ -43,7 +43,7 @@ class TreeGenerator
{
$reference = $this->tree;
- $this->branchNodes($branch)
+ $this->nodes($branch)
->each(function ($node, $index) use (&$reference, $branch) {
if (!$reference->has($node)) {
$reference->$node = $this->isEndingNode($index)
@@ -55,7 +55,7 @@ class TreeGenerator
});
}
- private function branchNodes($branch)
+ private function nodes($branch)
{
$branchNodes = collect(
explode('.', $branch->name) | renamed methods in treegenerator | laravel-enso_RoleManager | train | php |
788cf470b951dee322f1998ccc39176a03c6bbfc | diff --git a/src/OAuth/UserData/Extractor/GitHub.php b/src/OAuth/UserData/Extractor/GitHub.php
index <HASH>..<HASH> 100644
--- a/src/OAuth/UserData/Extractor/GitHub.php
+++ b/src/OAuth/UserData/Extractor/GitHub.php
@@ -40,9 +40,9 @@ class GitHub extends LazyExtractor
{
return json_decode(
$this->service->request(
- self::REQUEST_EMAIL,
- 'GET',
- array(),
+ self::REQUEST_EMAIL,
+ 'GET',
+ array(),
array('Accept' => 'application/vnd.github.v3')
),
true | Fixed code style for Scrutinizer (yep, again!) | Oryzone_PHPoAuthUserData | train | php |
2a2df3714797a273598e7be1f3f791d24529e59a | diff --git a/src/ocLazyLoad.loaders.requireJSLoader.js b/src/ocLazyLoad.loaders.requireJSLoader.js
index <HASH>..<HASH> 100644
--- a/src/ocLazyLoad.loaders.requireJSLoader.js
+++ b/src/ocLazyLoad.loaders.requireJSLoader.js
@@ -12,7 +12,7 @@
* because the user can overwrite jsLoader and it will probably not use promises :(
*/
$delegate.jsLoader = function (paths, callback, params) {
- require(paths, callback, callback, params);
+ require(paths, callback.bind(null, undefined), callback, params);
};
$delegate.jsLoader.requirejs = true; | Fix: Success callback for requirejs should pass undefine | ocombe_ocLazyLoad | train | js |
47a9253a839e7103f713055e72215b8d53cce09e | diff --git a/src/utils/overlayPositionUtils.js b/src/utils/overlayPositionUtils.js
index <HASH>..<HASH> 100644
--- a/src/utils/overlayPositionUtils.js
+++ b/src/utils/overlayPositionUtils.js
@@ -1,2 +1,2 @@
-export * from 'react-overlays/lib/utils/overlayPositionUtils';
+export * from 'react-overlays/lib/utils/calculatePosition'; | Update overlayPositionUtils.js
why are we re-exporting this? | react-bootstrap_react-bootstrap | train | js |
44ae78ed6251369b2a61b458e13427fad8a5e507 | diff --git a/glacier/treehash_test.go b/glacier/treehash_test.go
index <HASH>..<HASH> 100644
--- a/glacier/treehash_test.go
+++ b/glacier/treehash_test.go
@@ -151,6 +151,35 @@ func BenchmarkTreeHash(b *testing.B) {
th.Close()
}
+func BenchmarkTreeHashReset(b *testing.B) {
+ b.StopTimer()
+ data1 := make([]byte, 768)
+ for i := range data1 {
+ data1[i] = 'a'
+ }
+ data2 := make([]byte, 2034)
+ for i := range data2 {
+ data2[i] = 'a'
+ }
+ th := NewTreeHash()
+ b.StartTimer()
+
+ for i := 0; i < b.N; i++ {
+ n1, err := th.Write(data1)
+ if err != nil {
+ b.Fatal(err)
+ }
+ n2, err := th.Write(data2)
+ if err != nil {
+ b.Fatal(err)
+ }
+ th.Close()
+ th.Reset()
+ b.SetBytes(int64(n1 + n2))
+ }
+ th.Close()
+}
+
func BenchmarkTreeHashClose(b *testing.B) {
nodes := make([][sha256.Size]byte, 4)
for i := range nodes { | Add benchmark for resetting tree hash. | rdwilliamson_aws | train | go |
54f11b84d218c1a7ff4ab4e2148819265da213bc | diff --git a/daemon/logger/jsonfilelog/read.go b/daemon/logger/jsonfilelog/read.go
index <HASH>..<HASH> 100644
--- a/daemon/logger/jsonfilelog/read.go
+++ b/daemon/logger/jsonfilelog/read.go
@@ -77,6 +77,9 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
}
if !config.Follow {
+ if err := latestFile.Close(); err != nil {
+ logrus.Errorf("Error closing file: %v", err)
+ }
return
}
diff --git a/daemon/logs.go b/daemon/logs.go
index <HASH>..<HASH> 100644
--- a/daemon/logs.go
+++ b/daemon/logs.go
@@ -87,6 +87,13 @@ func (daemon *Daemon) ContainerLogs(ctx context.Context, containerName string, c
if !ok {
logrus.Debug("logs: end stream")
logs.Close()
+ if cLog != container.LogDriver {
+ // Since the logger isn't cached in the container, which occurs if it is running, it
+ // must get explicitly closed here to avoid leaking it and any file handles it has.
+ if err := cLog.Close(); err != nil {
+ logrus.Errorf("Error closing logger: %v", err)
+ }
+ }
return nil
}
logLine := msg.Line | Fixing file handle leak for "docker logs"
If "docker logs" was used on an offline container, the logger is leaked, leaving it up to the finalizer to close the file handle, which could block removal of the container. Further, the json file logger could leak an open handle if the logs are read without follow due to an early return without a close. This change addresses both cases. | moby_moby | train | go,go |
1983885acfccfe4ffa010401fd9ef0971bb6c12c | diff --git a/etcd3/__init__.py b/etcd3/__init__.py
index <HASH>..<HASH> 100644
--- a/etcd3/__init__.py
+++ b/etcd3/__init__.py
@@ -3,9 +3,11 @@ from __future__ import absolute_import
from etcd3.client import Etcd3Client
from etcd3.client import client
from etcd3.client import Transactions
+from etcd3.members import Member
__author__ = 'Louis Taylor'
__email__ = 'louis@kragniz.eu'
__version__ = '0.1.0'
-__all__ = ['Etcd3Client', 'client', 'etcdrpc', 'utils', 'Transactions']
+__all__ = ['Etcd3Client', 'client', 'etcdrpc', 'utils', 'Transactions',
+ 'Member'] | Make Member part of the public api | kragniz_python-etcd3 | train | py |
eb33b93f17da2266bd6c24294dccbf9e44978d37 | diff --git a/HardwareSource.py b/HardwareSource.py
index <HASH>..<HASH> 100644
--- a/HardwareSource.py
+++ b/HardwareSource.py
@@ -452,8 +452,8 @@ class HardwareSource(Storage.Broadcaster):
# these items are now live if we're playing right now. mark as such.
for data_item in new_channel_to_data_item_dict.values():
- data_item.begin_transaction()
data_item.increment_accessor_count()
+ data_item.begin_transaction()
# update the data items with the new data.
for channel in channels:
@@ -463,8 +463,12 @@ class HardwareSource(Storage.Broadcaster):
# these items are no longer live. mark live_data as False.
for channel, data_item in self.last_channel_to_data_item_dict.iteritems():
- data_item.decrement_accessor_count()
+ # the order of these two statements is important, at least for now (12/2013)
+ # when the transaction ends, the data will get written to disk, so we need to
+ # make sure it's still in memory. if decrement were to come before the end
+ # of the transaction, the data would be unloaded from memory, losing it forever.
data_item.end_transaction()
+ data_item.decrement_accessor_count()
# keep the channel to data item map around so that we know what changed between
# last iteration and this one. also handle reference counts. | Fix data unloading issue in hardware source.
svn r<I> | nion-software_nionswift | train | py |
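The comment added in this diff is really about operation ordering around reference counting: the transaction must end (flushing data to disk) while the accessor count still pins the data in memory. A toy Python model of why the order matters (the class is invented for illustration, not taken from nionswift):

```python
class DataItem:
    def __init__(self, data):
        self._data = data          # in-memory payload
        self._access_count = 0
        self._persisted = None

    def increment_accessor_count(self):
        self._access_count += 1

    def decrement_accessor_count(self):
        self._access_count -= 1
        if self._access_count == 0:
            self._data = None      # unload from memory when unreferenced

    def end_transaction(self):
        # Writes whatever is currently in memory to disk.
        self._persisted = self._data

item = DataItem("acquired frame")
item.increment_accessor_count()
# Correct order: persist while the accessor count still holds the data.
item.end_transaction()
item.decrement_accessor_count()
```

Reversing the last two calls would unload the data first and persist `None` — the "losing it forever" case the commit comment warns about.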
0e560ba5d13b0d283a00c3f1d643b34314129974 | diff --git a/languagetool-language-modules/de/src/main/java/org/languagetool/rules/de/GermanSpellerRule.java b/languagetool-language-modules/de/src/main/java/org/languagetool/rules/de/GermanSpellerRule.java
index <HASH>..<HASH> 100644
--- a/languagetool-language-modules/de/src/main/java/org/languagetool/rules/de/GermanSpellerRule.java
+++ b/languagetool-language-modules/de/src/main/java/org/languagetool/rules/de/GermanSpellerRule.java
@@ -305,7 +305,7 @@ public class GermanSpellerRule extends CompoundAwareHunspellRule {
put("[aA]nn?[ou]ll?ie?rung", "Annullierung");
put("[sS]charm", "Charme");
put("[zZ]auberlich(e[mnrs]?)?", w -> Arrays.asList(w.replaceFirst("lich", "isch"), w.replaceFirst("lich", "haft")));
- putRepl("[Dd]rumrum", "rum", "herum");
+ putRepl("[Dd]rumrum", "rum$", "herum");
putRepl("([uU]n)?proff?esionn?ell?(e[mnrs]?)?", "proff?esionn?ell?", "professionell");
putRepl("[kK]inderlich(e[mnrs]?)?", "inder", "ind");
putRepl("[wW]iedersprichs?t", "ieder", "ider"); | [de] improve suggestion for "drumrum" | languagetool-org_languagetool | train | java |
787075847bebc9c074c8cf900e91181dad816292 | diff --git a/src/pandas_profiling/model/pandas/duplicates_pandas.py b/src/pandas_profiling/model/pandas/duplicates_pandas.py
index <HASH>..<HASH> 100644
--- a/src/pandas_profiling/model/pandas/duplicates_pandas.py
+++ b/src/pandas_profiling/model/pandas/duplicates_pandas.py
@@ -35,7 +35,7 @@ def pandas_get_duplicates(
duplicated_rows = df.duplicated(subset=supported_columns, keep=False)
duplicated_rows = (
df[duplicated_rows]
- .groupby(supported_columns)
+ .groupby(supported_columns, dropna=False)
.size()
.reset_index(name=duplicates_key)
) | fix: pandas groupby ignoring duplicates with nans | pandas-profiling_pandas-profiling | train | py |
73f6ce304ca67196e12b2b2e1a4ee877baaac6da | diff --git a/dependency-check-core/src/main/java/org/owasp/dependencycheck/analyzer/JarAnalyzer.java b/dependency-check-core/src/main/java/org/owasp/dependencycheck/analyzer/JarAnalyzer.java
index <HASH>..<HASH> 100644
--- a/dependency-check-core/src/main/java/org/owasp/dependencycheck/analyzer/JarAnalyzer.java
+++ b/dependency-check-core/src/main/java/org/owasp/dependencycheck/analyzer/JarAnalyzer.java
@@ -169,7 +169,8 @@ public class JarAnalyzer extends AbstractFileTypeAnalyzer {
*/
public JarAnalyzer() {
try {
- final JAXBContext jaxbContext = JAXBContext.newInstance("org.owasp.dependencycheck.jaxb.pom.generated");
+ //final JAXBContext jaxbContext = JAXBContext.newInstance("org.owasp.dependencycheck.jaxb.pom.generated");
+ final JAXBContext jaxbContext = JAXBContext.newInstance(org.owasp.dependencycheck.jaxb.pom.generated.Model.class);
pomUnmarshaller = jaxbContext.createUnmarshaller();
} catch (JAXBException ex) { //guess we will just have a null pointer exception later...
LOGGER.log(Level.SEVERE, "Unable to load parser. See the log for more details."); | corrected jaxb newInstance
Former-commit-id: <I>a1b<I>ad1e<I>ae9bff<I>cca<I>c6faaad7 | jeremylong_DependencyCheck | train | java |
4e7d91034ca16cd18349976788d61c1144b762a6 | diff --git a/autoload.php b/autoload.php
index <HASH>..<HASH> 100644
--- a/autoload.php
+++ b/autoload.php
@@ -12,7 +12,7 @@
// Search for an autoloader
// in phar environment
-if (extension_loaded('phar') && file_exists($file = Phar::running() . '/vendor/autoload.php')) {
+if (extension_loaded('phar') && method_exists('Phar', 'running') && file_exists($file = Phar::running() . '/vendor/autoload.php')) {
$loader = require_once $file;
} elseif (file_exists($file = __DIR__ . '/../../autoload.php')) { | the running method does not exists on Phar in HHVM | liip_RMT | train | php |
507917748a10f81f1973bef90b2d6a43d319ef34 | diff --git a/command/agent/http.go b/command/agent/http.go
index <HASH>..<HASH> 100644
--- a/command/agent/http.go
+++ b/command/agent/http.go
@@ -9,6 +9,7 @@ import (
"net"
"net/http"
"net/http/pprof"
+ "net/url"
"os"
"strconv"
"strings"
@@ -283,9 +284,14 @@ func (s *HTTPServer) wrap(handler func(resp http.ResponseWriter, req *http.Reque
setHeaders(resp, s.agent.config.HTTPAPIResponseHeaders)
// Obfuscate any tokens from appearing in the logs
- req.ParseForm()
+ formVals, err := url.ParseQuery(req.URL.RawQuery)
+ if err != nil {
+ s.logger.Printf("[ERR] http: Failed to decode query: %s", err)
+ resp.WriteHeader(500)
+ return
+ }
logURL := req.URL.String()
- if tokens, ok := req.Form["token"]; ok {
+ if tokens, ok := formVals["token"]; ok {
for _, token := range tokens {
logURL = strings.Replace(logURL, token, "<hidden>", -1)
} | agent: parse raw query URL to avoid closing the request body early | hashicorp_consul | train | go |
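Two things are going on in the consul hunk: the raw query string is parsed directly (so a PUT/POST request body is not consumed the way `req.ParseForm()` would consume it), and any `token` values are blanked before the URL reaches the logs. A Python sketch of the redaction step:

```python
from urllib.parse import urlsplit, parse_qs

def redact_tokens(url):
    # Replace every `token` query value so secrets never appear in logs;
    # only the query component of the URL is parsed, leaving any request
    # body untouched.
    query = urlsplit(url).query
    logged = url
    for token in parse_qs(query).get("token", []):
        logged = logged.replace(token, "<hidden>")
    return logged
```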
d552b20f722c8dde8a49424de83235d6f3b80d95 | diff --git a/lib/spaceship/portal/portal_client.rb b/lib/spaceship/portal/portal_client.rb
index <HASH>..<HASH> 100644
--- a/lib/spaceship/portal/portal_client.rb
+++ b/lib/spaceship/portal/portal_client.rb
@@ -24,7 +24,7 @@ module Spaceship
results = headers['location'].match(/.*appIdKey=(\h+)/)
if (results || []).length > 1
api_key = results[1]
- File.write(cache_path, api_key)
+ File.write(cache_path, api_key) if api_key.length == 64
return api_key
else
raise "Could not find latest API Key from the Dev Portal - the server might be slow right now" | Caching of spaceship API key only if length is correct | fastlane_fastlane | train | rb |
d3b0b3da904a4487102232a1edf0f7b4c96e0aa3 | diff --git a/src/MultipleInput.php b/src/MultipleInput.php
index <HASH>..<HASH> 100644
--- a/src/MultipleInput.php
+++ b/src/MultipleInput.php
@@ -10,6 +10,7 @@ namespace unclead\widgets;
use Yii;
use yii\base\Model;
+use yii\helpers\ArrayHelper;
use yii\widgets\InputWidget;
use yii\db\ActiveRecordInterface;
use unclead\widgets\renderers\TableRenderer;
@@ -127,7 +128,10 @@ class MultipleInput extends InputWidget
}
if ($this->model instanceof Model) {
- foreach ((array)$this->model->{$this->attribute} as $index => $value) {
+ $data = $this->model->hasProperty($this->attribute)
+ ? ArrayHelper::getValue($this->model, $this->attribute, [])
+ : [];
+ foreach ((array) $data as $index => $value) {
$this->data[$index] = $value;
}
} | Get data from a model only if an attribute exists | unclead_yii2-multiple-input | train | php |
6c0af33c1333201dd5a664ad9e104e21050f1327 | diff --git a/spec/unit/virtus/attribute/class_methods/determine_type_spec.rb b/spec/unit/virtus/attribute/class_methods/determine_type_spec.rb
index <HASH>..<HASH> 100644
--- a/spec/unit/virtus/attribute/class_methods/determine_type_spec.rb
+++ b/spec/unit/virtus/attribute/class_methods/determine_type_spec.rb
@@ -3,7 +3,7 @@ require 'spec_helper'
describe Virtus::Attribute, '.determine_type' do
let(:object) { described_class }
- described_class.descendants.each do |attribute_class|
+ (described_class.descendants - [ Virtus::Attribute::Collection ]).each do |attribute_class|
context "with class #{attribute_class.inspect}" do
subject { object.determine_type(attribute_class) }
@@ -16,12 +16,6 @@ describe Virtus::Attribute, '.determine_type' do
context "with #{attribute_class.inspect} and primitive #{primitive.inspect}" do
subject { object.determine_type(primitive) }
- before do
- if [Virtus::Attribute::Collection].include? attribute_class
- pending
- end
- end
-
it 'returns the corresponding attribute class' do
should be(attribute_class)
end | Remove pending case from determine_type spec | solnic_virtus | train | rb |
e89b7f26430d8cd74056023a0bff6db995ddeb6e | diff --git a/admin/jqadm/src/Admin/JQAdm/Order/Standard.php b/admin/jqadm/src/Admin/JQAdm/Order/Standard.php
index <HASH>..<HASH> 100644
--- a/admin/jqadm/src/Admin/JQAdm/Order/Standard.php
+++ b/admin/jqadm/src/Admin/JQAdm/Order/Standard.php
@@ -239,10 +239,6 @@ class Standard
$search = $manager->createSearch();
$search->setSortations( [$search->sort( '-', 'order.id' )] );
$search = $this->initCriteria( $search, $params );
- $search->setConditions( $search->combine( '&&', [
- $search->compare( '=~', 'order.base.product.siteid', $this->getContext()->getLocale()->getSiteId() ),
- $search->getConditions()
- ] ) );
$view->items = $manager->searchItems( $search, [], $total );
$view->baseItems = $this->getOrderBaseItems( $view->items ); | Orders are restricted by the order manager now | aimeos_ai-admin-jqadm | train | php |
a7483cbc50a87ab71cf8c0e523b747ea28227caf | diff --git a/lib/kuniri/language/ruby/ruby_syntax.rb b/lib/kuniri/language/ruby/ruby_syntax.rb
index <HASH>..<HASH> 100644
--- a/lib/kuniri/language/ruby/ruby_syntax.rb
+++ b/lib/kuniri/language/ruby/ruby_syntax.rb
@@ -14,6 +14,7 @@ require_relative 'function_behavior_ruby'
require_relative 'attribute_ruby'
require_relative 'comment_ruby'
require_relative 'method_ruby'
+require_relative 'aggregation_ruby'
module Languages
@@ -36,6 +37,7 @@ module Languages
@conditionalHandler = Languages::Ruby::ConditionalRuby.new
@repetitionHandler = Languages::Ruby::RepetitionRuby.new
@commentHandler = Languages::Ruby::CommentRuby.new
+ @aggregationHandler = Languages::Ruby::AggregationRuby.new
@visibility = "public"
end | Add aggregation handler for ruby syntax | Kuniri_kuniri | train | rb |
1624563a5a18fda46472708635b45888f28e0396 | diff --git a/lib/bandit/storage/dalli.rb b/lib/bandit/storage/dalli.rb
index <HASH>..<HASH> 100644
--- a/lib/bandit/storage/dalli.rb
+++ b/lib/bandit/storage/dalli.rb
@@ -3,7 +3,8 @@ module Bandit
def initialize(config)
require 'dalli'
config[:namespace] ||= 'bandit'
- @dalli = Dalli::Client.new(config.fetch(:host, 'localhost:11211'), config)
+ server = "#{config[:host]}:#{config.fetch([:port], 11211)}" if config[:host]
+ @dalli = Dalli::Client.new(server, config)
end
# increment key by count | Make Dalli client server more flexible
Before it was only possible to use the host specified in bandit.yml (and
the port was ignored as well). Now, if the host isn't specified in the
config Dalli will properly fall back to using ENV["MEMCACHE_SERVERS"] or
<I>:<I>.
See: <URL> | bmuller_bandit | train | rb |
76f2a3b0cc4bec88e6ae8c150e687fd3653db2df | diff --git a/assets/js/reportico.js b/assets/js/reportico.js
index <HASH>..<HASH> 100755
--- a/assets/js/reportico.js
+++ b/assets/js/reportico.js
@@ -1050,8 +1050,14 @@ reportico_jquery(document).on('click', '.reportico-admin-button, .reportico-admi
*/
function ajaxFileDownload(url, data, expandpanel, reportico_container) {
- url += getYiiAjaxURL();
- url += "?reportico_padding=1";
+ ajaxextra = getYiiAjaxURL();
+ if ( ajaxextra != "" ) {
+ url += ajaxextra
+ url += "&reportico_padding=1";
+ }
+ else
+ url += "?reportico_padding=1";
+
url += getCSRFURLParams();
headers = getCSRFHeaders(); | Fixed download of PDF and CSV in Yii module | reportico-web_reportico | train | js |
be88d2fa69978b65b71f96ba1a975a08f638e4c2 | diff --git a/spyderlib/utils/programs.py b/spyderlib/utils/programs.py
index <HASH>..<HASH> 100644
--- a/spyderlib/utils/programs.py
+++ b/spyderlib/utils/programs.py
@@ -145,7 +145,7 @@ def run_python_script_in_terminal(fname, wdir, args, interact,
# Command line and cwd have to be converted to the filesystem
# encoding before passing them to subprocess
# See http://bugs.python.org/issue1759845#msg74142
- from spyderlib.encoding import to_fs_from_unicode
+ from spyderlib.utils.encoding import to_fs_from_unicode
subprocess.Popen(to_fs_from_unicode(cmd), shell=True,
cwd=to_fs_from_unicode(wdir))
elif os.name == 'posix': | Running a script in an external system terminal: feature was broken since
a bunch of changes that were made a while ago to fix unicode-related issues | spyder-ide_spyder | train | py |
e029cd68203f85bb5621459ad4c503416a16ae67 | diff --git a/lib/gman_client.rb b/lib/gman_client.rb
index <HASH>..<HASH> 100644
--- a/lib/gman_client.rb
+++ b/lib/gman_client.rb
@@ -3,7 +3,7 @@ require 'blanket'
# GmanClient
module GmanClient
- def self.receive_drivers
+ def self.drivers
request = Blanket.wrap("http://localhost:3000", :extension => :json)
respond = request.api.v1.drivers.get
respond.map(&:to_h)
diff --git a/spec/lib/driver_response_spec.rb b/spec/lib/driver_response_spec.rb
index <HASH>..<HASH> 100644
--- a/spec/lib/driver_response_spec.rb
+++ b/spec/lib/driver_response_spec.rb
@@ -8,7 +8,7 @@ require 'support/vcr'
RSpec.describe GmanClient do
describe '.receive_drivers' do
VCR.use_cassette('drivers') do
- response = GmanClient.receive_drivers
+ response = GmanClient.drivers
subject(:driver_response) { response }
let(:driver) { driver_response.first } | GmanClient.receive_drivers is nowGmanClient.drivers. | westernmilling_gman_client | train | rb,rb |
3b348bbf4de06581c5558220e7f5a2ae1e76c333 | diff --git a/mod/resource/type/file/resource.class.php b/mod/resource/type/file/resource.class.php
index <HASH>..<HASH> 100644
--- a/mod/resource/type/file/resource.class.php
+++ b/mod/resource/type/file/resource.class.php
@@ -574,7 +574,7 @@ class resource_file extends resource_base {
'<script type="text/javascript">'."\n".
'//<![CDATA['."\n".
'var FO = { movie:"'.$CFG->wwwroot.'/filter/mediaplugin/flvplayer.swf?file='.$cleanurl.'",'."\n".
- 'width:"600", height:"400", majorversion:"6", build:"40", allowscriptaccess:"never", quality: "high" };'."\n".
+ 'width:"600", height:"400", majorversion:"6", build:"40", allowscriptaccess:"never", allowfullscreen:"true", quality: "high" };'."\n".
'UFO.create(FO, "'.$id.'");'."\n".
'//]]>'."\n".
'</script>'."\n"; | MDL-<I> flv player in resources - allow full screen. Credit goes to Darren Jones. Merged from <I>_STABLE | moodle_moodle | train | php |
e71371d163bbfcb6424b2453797e4727f40ce0ee | diff --git a/Topic/TopicPeriodicTimer.php b/Topic/TopicPeriodicTimer.php
index <HASH>..<HASH> 100644
--- a/Topic/TopicPeriodicTimer.php
+++ b/Topic/TopicPeriodicTimer.php
@@ -114,7 +114,7 @@ class TopicPeriodicTimer implements \IteratorAggregate
{
$namespace = spl_object_hash($topic);
- if (isset($this->registry[$namespace][$name])) {
+ if (!isset($this->registry[$namespace][$name])) {
return;
} | Periodic timer events do not cancel | GeniusesOfSymfony_WebSocketBundle | train | php |
9498bb1dee96d850b8398be34f30bb97c2353c08 | diff --git a/lib/http/client.rb b/lib/http/client.rb
index <HASH>..<HASH> 100644
--- a/lib/http/client.rb
+++ b/lib/http/client.rb
@@ -135,6 +135,7 @@ module HTTP
end
# Moves uri get params into the opts.params hash
+ # @return [URI, Hash]
def normalize_get_params(uri, opts)
uri = URI(uri)
if uri.query | Get YARD-coverage back to passing | httprb_http | train | rb |
b64b2e988274a149c79b5fb7efbad7c436585906 | diff --git a/lib/WebRtcPeer.js b/lib/WebRtcPeer.js
index <HASH>..<HASH> 100644
--- a/lib/WebRtcPeer.js
+++ b/lib/WebRtcPeer.js
@@ -138,7 +138,7 @@ function WebRtcPeer(mode, options, callback) {
var pc = options.peerConnection
var sendSource = options.sendSource || 'webcam'
- var guid = uuid.v4
+ var guid = uuid.v4()
var configuration = recursive({
iceServers: freeice()
}, | [client] corrected typo in uuid.v4 invocation
Change-Id: I7cee<I>c<I>ad<I>dbdd<I>db2fd3f9bd<I>e<I>d | Kurento_kurento-utils-js | train | js |
f2910e0060f4055fc57ff6bd6f80c78b2551a126 | diff --git a/jira/client.py b/jira/client.py
index <HASH>..<HASH> 100755
--- a/jira/client.py
+++ b/jira/client.py
@@ -2122,11 +2122,11 @@ class JIRA(object):
def _create_kerberos_session(self):
verify = self._options['verify']
- from requests_kerberos import HTTPKerberosAuth
+ from requests_kerberos import HTTPKerberosAuth, OPTIONAL
self._session = ResilientSession()
self._session.verify = verify
- self._session.auth = HTTPKerberosAuth()
+ self._session.auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)
@staticmethod
def _timestamp(dt=None): | made kerberos mutual auth optional as not all servers support this | pycontribs_jira | train | py |
af1cbe35a10998dd7c204724a13887d37a66dc3b | diff --git a/lib/SlackBot.js b/lib/SlackBot.js
index <HASH>..<HASH> 100755
--- a/lib/SlackBot.js
+++ b/lib/SlackBot.js
@@ -526,6 +526,8 @@ function Slackbot(configuration) {
name: identity.team,
};
}
+
+ team.state = state;
var bot = slack_botkit.spawn(team); | Store state parameter from oauth against team
I display the 'Add to slack' button on my website and I would like to send the logged in user's id when the button is clicked. To do so I use the `state` parameter but it appears that parameter is currently never accessible. This fix works locally in my certain case, but as I'm new to botkit there may be issues with this implementation that I'm unaware of. | howdyai_botkit | train | js |
1543bb73c1d4b01a22a1d274725496322568daf7 | diff --git a/airflow/models/dag.py b/airflow/models/dag.py
index <HASH>..<HASH> 100644
--- a/airflow/models/dag.py
+++ b/airflow/models/dag.py
@@ -2315,9 +2315,9 @@ def dag(*dag_args, **dag_kwargs):
Accepts kwargs for operator kwarg. Can be used to parametrize DAGs.
:param dag_args: Arguments for DAG object
- :type dag_args: list
+ :type dag_args: Any
:param dag_kwargs: Kwargs for DAG object.
- :type dag_kwargs: dict
+ :type dag_kwargs: Any
"""
def wrapper(f: Callable): | Fixed type annotations in DAG decorator (#<I>) | apache_airflow | train | py |
db9aeb37631f1d168fc3f8016c550fbe9e2190d5 | diff --git a/kronos/__init__.py b/kronos/__init__.py
index <HASH>..<HASH> 100644
--- a/kronos/__init__.py
+++ b/kronos/__init__.py
@@ -14,6 +14,7 @@ from django.conf import settings
from kronos.utils import read_crontab, write_crontab, delete_crontab
from kronos.version import __version__
import six
+from django.utils.module_loading import autodiscover_modules
tasks = []
@@ -22,19 +23,12 @@ def load():
"""
Load ``cron`` modules for applications listed in ``INSTALLED_APPS``.
"""
- paths = ['%s.cron' % PROJECT_MODULE.__name__]
+ autodiscover_modules('cron')
if '.' in PROJECT_MODULE.__name__:
- paths.append('%s.cron' % '.'.join(
- PROJECT_MODULE.__name__.split('.')[0:-1]))
-
- for application in settings.INSTALLED_APPS:
- paths.append('%s.cron' % application)
-
- # load kronostasks
- for p in paths:
try:
- import_module(p)
+ import_module('%s.cron' % '.'.join(
+ PROJECT_MODULE.__name__.split('.')[0:-1]))
except ImportError as e:
if 'No module named' not in str(e):
print(e) | Use the autodiscover_modules to load modules with cron.py
Before it was trying to import .cron from all apps from INSTALLED_APPS,
but this missed the new AppConfig apps, which are in INSTALLED_APPS as
objects and not modules. Django uses autodiscover_modules internally
for models.py (for example), so it should work on all expected situations.
I left the previous PROJECT_MODULE.__name__ as I don't fully
understand why it is there. | jgorset_django-kronos | train | py
a5750ccf134687420715826659675d1db4817728 | diff --git a/index.js b/index.js
index <HASH>..<HASH> 100755
--- a/index.js
+++ b/index.js
@@ -1,5 +1,5 @@
#! /usr/bin/env node
-
+'use strict'
var fs = require('fs')
var path = require('path')
var os = require('osenv')
@@ -86,12 +86,14 @@ var inject = module.exports = function (cache, config) {
//get from the hash if it's already been downloaded!
//else download from the registry or the url.
-
- var key = pkg.name + '@' + pkg.version
+
+ //actually, it's better to use the url as the key
+ //so that the cache downloads the correct tarball.
+ //this would all be so much simpler if npm just
+ //used the hashes as the name of the tarball.
+
var query = {
- name: pkg.name, version: pkg.version,
- key: key,
- tarball: pkg.tarball, hash: pkg.shasum
+ key: pkg.tarball, hash: pkg.shasum
}
return cache.createStream(query, function (err, stream) { | key the tarball with the tarball url | dominictarr_npmd-install | train | js |
d4a1e27239a3b0e849636e918fa4ca7de6e98327 | diff --git a/lib/Image.js b/lib/Image.js
index <HASH>..<HASH> 100644
--- a/lib/Image.js
+++ b/lib/Image.js
@@ -78,6 +78,18 @@ var Image = function (_Component) {
key: 'thumbnailStyle',
value: function thumbnailStyle() {
if (this.props.thumbnailStyle) return this.props.thumbnailStyle.call(this);
+ var transformValue = undefined;
+ switch (this.props.item.orientation) {
+ case 3:
+ transformValue = "rotate(180deg)";
+ break;
+ case 6:
+ transformValue = "rotate(90deg)";
+ break;
+ case 8:
+ transformValue = "rotate(270deg)";
+ break;
+ }
if (this.props.item.isSelected) {
var ratio = this.props.item.scaletwidth / this.props.height;
var height = 0;
@@ -100,7 +112,8 @@ var Image = function (_Component) {
width: width,
height: height,
marginLeft: marginLeft,
- marginTop: marginTop
+ marginTop: marginTop,
+ transform: transformValue
};
}
return {
@@ -108,7 +121,8 @@ var Image = function (_Component) {
width: this.props.item.scaletwidth,
height: this.props.height,
marginLeft: this.props.item.marginLeft,
- marginTop: 0
+ marginTop: 0,
+ transform: transformValue
};
}
}, { | Added a capability to display images properly after rotating them according to their orientation | benhowell_react-grid-gallery | train | js |
a861bcaf1b7fe1a9dfda8eeafeadd95d90c09028 | diff --git a/src/Console/ModelsCommand.php b/src/Console/ModelsCommand.php
index <HASH>..<HASH> 100644
--- a/src/Console/ModelsCommand.php
+++ b/src/Console/ModelsCommand.php
@@ -226,7 +226,12 @@ class ModelsCommand extends Command
$databasePlatform->registerDoctrineTypeMapping($yourTypeName, $doctrineTypeName);
}
- $columns = $schema->listTableColumns($table);
+ $database = null;
+ if (strpos($table, '.')) {
+ list($database, $table) = explode('.', $table);
+ }
+
+ $columns = $schema->listTableColumns($table, $database);
if ($columns) {
foreach ($columns as $column) { | Update ModelsCommand.php
Work with tables in different databases | barryvdh_laravel-ide-helper | train | php |
ec56e4da13028cee4b3c6d580d841ea236c8d0f4 | diff --git a/morphia/src/test/java/com/google/code/morphia/mapping/ReferencesInEmbeddedTest.java b/morphia/src/test/java/com/google/code/morphia/mapping/ReferencesInEmbeddedTest.java
index <HASH>..<HASH> 100644
--- a/morphia/src/test/java/com/google/code/morphia/mapping/ReferencesInEmbeddedTest.java
+++ b/morphia/src/test/java/com/google/code/morphia/mapping/ReferencesInEmbeddedTest.java
@@ -34,7 +34,7 @@ public class ReferencesInEmbeddedTest extends TestBase
}
@Entity
- private static class ReferencedEntity extends TestEntity{
+ public static class ReferencedEntity extends TestEntity{
private static final long serialVersionUID = 1L;
String foo;
} | broken by revision <I>, referenced class must not be private. | MorphiaOrg_morphia | train | java |
75453fbc6ffe3bc35310be42bded4eba90f9948e | diff --git a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py
index <HASH>..<HASH> 100644
--- a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py
+++ b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py
@@ -112,9 +112,14 @@ def worker_efficiency(task, node):
for i, row in task.iterrows():
if math.isnan(row['epoch_time_running']):
- # skip tasks with no running start time.
+ # skip tasks with no running start time
continue
- for j in range(int(row['epoch_time_running']), int(row['epoch_time_returned']) + 1):
+ if math.isnan(row['epoch_time_returned']):
+ # there is no end time for this, so we should assume the "end" time
+ etr = end
+ else:
+ etr = int(row['epoch_time_returned'])
+ for j in range(int(row['epoch_time_running']), etr + 1):
worker_plot[j - start] += 1
fig = go.Figure(
data=[go.Scatter(x=list(range(0, end - start + 1)), | Fix worker efficiency plot when tasks are still in progress (#<I>)
Prior to this PR, this plot would crash. After this PR, this plot regards task without an end time as still in progress and occupying a worker. | Parsl_parsl | train | py |
2873c252665be08b8e5e1db20e36b263a26e061f | diff --git a/test/com/opera/core/systems/OperaSettingsIntegrationTest.java b/test/com/opera/core/systems/OperaSettingsIntegrationTest.java
index <HASH>..<HASH> 100644
--- a/test/com/opera/core/systems/OperaSettingsIntegrationTest.java
+++ b/test/com/opera/core/systems/OperaSettingsIntegrationTest.java
@@ -138,6 +138,7 @@ public class OperaSettingsIntegrationTest extends OperaDriverTestCase {
}
@Test
+ @Ignore(value = "Needs investigation")
public void loggingFileReceivesOutput() throws IOException {
tmp.create();
File log = tmp.newFile("operadriver.log"); | Ignore loggingFileReceivesOutput test for now | operasoftware_operaprestodriver | train | java |
cc02e208b3a712cd5ef098b5bc819fbb1df275d4 | diff --git a/newcore.py b/newcore.py
index <HASH>..<HASH> 100644
--- a/newcore.py
+++ b/newcore.py
@@ -43,7 +43,12 @@
# - a class implementing a given interface is also considered to implement all
# the ascendants of this interface
-__all__ = ['Component', 'implements', 'Interface', 'implementations']
+__all__ = ['Component', 'implements', 'Interface', 'implementations', 'TimeSideError']
+
+class TimeSideError(Exception):
+ """Exception base class for errors in TimeSide."""
+ # FIXME: is this redundant with Django's error handling ?
+ # FIXME: this class doesn't belong to the core
class Interface(object):
"""Marker base class for interfaces.""" | migrate TimeSideError into new core (shouldn't be there though) | Parisson_TimeSide | train | py |
b47e8387fd4c6dd727398b4177b063661590c8b6 | diff --git a/src/main/java/org/gaul/s3proxy/S3Proxy.java b/src/main/java/org/gaul/s3proxy/S3Proxy.java
index <HASH>..<HASH> 100644
--- a/src/main/java/org/gaul/s3proxy/S3Proxy.java
+++ b/src/main/java/org/gaul/s3proxy/S3Proxy.java
@@ -48,6 +48,12 @@ public final class S3Proxy {
private final Server server;
+ static {
+ // Prevent Jetty from rewriting headers:
+ // https://bugs.eclipse.org/bugs/show_bug.cgi?id=414449
+ System.setProperty("org.eclipse.jetty.http.HttpParser.STRICT", "true");
+ }
+
public S3Proxy(BlobStore blobStore, URI endpoint, String identity,
String credential) {
Preconditions.checkNotNull(blobStore); | Enable Jetty STRICT mode to get true header values
Previously Jetty rewrote header values such as the Content-Type, which
broke signing with the AWS SDK. Reference:
<URL> | gaul_s3proxy | train | java |
876c8de7b7218f57ac00dd3a0b7471952cc722c8 | diff --git a/AnyQt/_api.py b/AnyQt/_api.py
index <HASH>..<HASH> 100644
--- a/AnyQt/_api.py
+++ b/AnyQt/_api.py
@@ -45,7 +45,12 @@ def comittoapi(api):
if AnyQt.__SELECTED_API is not None:
comittoapi(AnyQt.__SELECTED_API)
elif "QT_API" in os.environ:
- comittoapi(os.environ["QT_API"].lower())
+ api = os.environ["QT_API"].lower()
+ if api == "pyqt":
+ # Qt.py allows both pyqt4 and pyqt to specify PyQt4.
+ # When run from anaconda-navigator, pyqt is used.
+ api = "pyqt4"
+ comittoapi(api)
else:
# Check sys.modules for existing imports
__existing = None
@@ -94,4 +99,4 @@ if "ANYQT_HOOK_BACKPORT" in os.environ:
for __bacportapi in os.environ["ANYQT_HOOK_BACKPORT"].split(","):
if __bacportapi.lower() != USED_API:
install_backport_hook(__bacportapi.lower())
- del install_backport_hook
\ No newline at end of file
+ del install_backport_hook | Allow pyqt as a valid QT_API env value | ales-erjavec_anyqt | train | py |
74f0d2fccad19c3ad4b91233c1bca5b506c3090f | diff --git a/general_utils.js b/general_utils.js
index <HASH>..<HASH> 100644
--- a/general_utils.js
+++ b/general_utils.js
@@ -100,4 +100,4 @@ function _mergeOptions(obj1, obj2){
}
}
return obj3;
-}
+}
\ No newline at end of file | add download functionality to s3 fine uploader | bigchaindb_js-utility-belt | train | js |
c75df08ac9ded3706e20b6529d409b48541dbde3 | diff --git a/Bundle/CoreBundle/Listener/ViewCssListener.php b/Bundle/CoreBundle/Listener/ViewCssListener.php
index <HASH>..<HASH> 100644
--- a/Bundle/CoreBundle/Listener/ViewCssListener.php
+++ b/Bundle/CoreBundle/Listener/ViewCssListener.php
@@ -3,6 +3,7 @@
namespace Victoire\Bundle\CoreBundle\Listener;
use Doctrine\ORM\EntityManager;
+use Victoire\Bundle\BusinessPageBundle\Entity\VirtualBusinessPage;
use Victoire\Bundle\CoreBundle\Builder\ViewCssBuilder;
use Victoire\Bundle\CoreBundle\Event\PageRenderEvent;
use Victoire\Bundle\WidgetMapBundle\Builder\WidgetMapBuilder;
@@ -36,7 +37,10 @@ class ViewCssListener
{
$currentView = $event->getCurrentView();
- if (!$viewHash = $currentView->getCssHash()) {
+ if($currentView instanceof VirtualBusinessPage)
+ {
+ $currentView->setCssHash($currentView->getTemplate()->getCssHash());
+ } else if (!$viewHash = $currentView->getCssHash()) {
$currentView->changeCssHash();
$this->entityManager->persist($currentView);
$this->entityManager->flush($currentView); | get CssHash from template if it's a virtual business page | Victoire_victoire | train | php |
2bf06c149ca10020b59f7286029c8b8f7ab737ec | diff --git a/structr-neo4j-bolt-driver/src/main/java/org/structr/bolt/BoltDatabaseService.java b/structr-neo4j-bolt-driver/src/main/java/org/structr/bolt/BoltDatabaseService.java
index <HASH>..<HASH> 100644
--- a/structr-neo4j-bolt-driver/src/main/java/org/structr/bolt/BoltDatabaseService.java
+++ b/structr-neo4j-bolt-driver/src/main/java/org/structr/bolt/BoltDatabaseService.java
@@ -231,7 +231,9 @@ public class BoltDatabaseService extends AbstractDatabaseService implements Grap
sessions.set(session);
} catch (ServiceUnavailableException ex) {
- throw new NetworkException(ex.getMessage(), ex);
+
+ logger.warn("ServiceUnavailableException in BoltDataBaseService.beginTx(). Retrying with timeout.");
+ return beginTx(1);
} catch (ClientException cex) {
logger.warn("Cannot connect to Neo4j database server at {}: {}", databaseUrl, cex.getMessage());
} | Adds a retry with timeout to beginTx for the BoltDatabaseService. | structr_structr | train | java |
622b501d486340c69a9efba3258f79bc2c367cbe | diff --git a/src/utils/translate.js b/src/utils/translate.js
index <HASH>..<HASH> 100644
--- a/src/utils/translate.js
+++ b/src/utils/translate.js
@@ -39,10 +39,10 @@ let getTranslate = function(element) {
}
let transform = element.style[transformProperty]
- let matches = transform.match(/translate3d\(\s*(-?\d+\.?\d*)px,\s*(-?\d+\.?\d*)px,.*\)/)
+ let matches = transform.match(/translate(3d)?\(\s*(-?\d+\.?\d*)px,\s*(-?\d+\.?\d*)px.*\)/)
if(matches) {
- result.left = +matches[1]
- result.top = +matches[2]
+ result.left = +matches[2]
+ result.top = +matches[3]
}
return result | [fix] fix translateUtils bug which cause picker abnormal behavior | wdfe_wdui | train | js |
f9370fe48fe525dd11d4b4110b080ee87b1cdd26 | diff --git a/lib/Widget/Upload.php b/lib/Widget/Upload.php
index <HASH>..<HASH> 100644
--- a/lib/Widget/Upload.php
+++ b/lib/Widget/Upload.php
@@ -243,7 +243,6 @@ class Upload extends Image
$this->file = $this->dir . '/' . $fileName;
if (!$this->moveUploadedFile($uploadedFile['tmp_name'], $this->file)) {
$this->addError('cantMove');
- $this->logger->critical($this->cantMoveMessage);
return false;
removed logger in file upload; logging error messages yourself in business logic is better | twinh_wei | train | php
6b7adb50df49f2a32185bbc396c406ff8947f9ec | diff --git a/optaplanner-core/src/main/java/org/optaplanner/core/impl/io/jaxb/JaxbIO.java b/optaplanner-core/src/main/java/org/optaplanner/core/impl/io/jaxb/JaxbIO.java
index <HASH>..<HASH> 100644
--- a/optaplanner-core/src/main/java/org/optaplanner/core/impl/io/jaxb/JaxbIO.java
+++ b/optaplanner-core/src/main/java/org/optaplanner/core/impl/io/jaxb/JaxbIO.java
@@ -82,7 +82,7 @@ public final class JaxbIO<T> {
marshaller.marshal(root, domResult);
} catch (JAXBException jaxbException) {
String errMessage = String.format("Unable to marshall the %s to XML.", rootClass.getName());
- throw new RuntimeException(errMessage, jaxbException);
+ throw new IllegalStateException(errMessage, jaxbException);
}
// See https://stackoverflow.com/questions/46708498/jaxb-marshaller-indentation.
@@ -103,7 +103,7 @@ public final class JaxbIO<T> {
transformer.transform(new DOMSource(domResult.getNode()), new StreamResult(writer));
} catch (TransformerException transformerException) {
String errMessage = String.format("Unable to format %s XML.", rootClass.getName());
- throw new RuntimeException(errMessage, transformerException);
+ throw new IllegalStateException(errMessage, transformerException);
}
}
} | Throw dedicated exceptions instead of generic ones | kiegroup_optaplanner | train | java |
cc46cb002a2c214480be0cfe3c1c06861b36150b | diff --git a/pkg/scheduler/framework/plugins/noderesources/most_allocated.go b/pkg/scheduler/framework/plugins/noderesources/most_allocated.go
index <HASH>..<HASH> 100644
--- a/pkg/scheduler/framework/plugins/noderesources/most_allocated.go
+++ b/pkg/scheduler/framework/plugins/noderesources/most_allocated.go
@@ -25,7 +25,7 @@ import (
// based on the maximum of the average of the fraction of requested to capacity.
//
// Details:
-// (cpu(MaxNodeScore * sum(requested) / capacity) + memory(MaxNodeScore * sum(requested) / capacity)) / weightSum
+// (cpu(MaxNodeScore * requested * cpuWeight / capacity) + memory(MaxNodeScore * requested * memoryWeight / capacity) + ...) / weightSum
func mostResourceScorer(resToWeightMap resourceToWeightMap) func(requested, allocable resourceToValueMap) int64 {
return func(requested, allocable resourceToValueMap) int64 {
var nodeScore, weightSum int64 | Update the comment in pkg/scheduler/framework/plugins/noderesources/most_allocated.go | kubernetes_kubernetes | train | go |
15f46a4a730a5a0b63b8cfc383b77ca6e82bf3c4 | diff --git a/Controller/ConnectController.php b/Controller/ConnectController.php
index <HASH>..<HASH> 100644
--- a/Controller/ConnectController.php
+++ b/Controller/ConnectController.php
@@ -220,7 +220,12 @@ class ConnectController extends Controller
if ($currentToken instanceof OAuthToken) {
// Update user token with new details
- $this->authenticateUser($request, $currentUser, $service, $accessToken, false);
+ $newToken =
+ is_array($accessToken) &&
+ (isset($accessToken['access_token']) || isset($accessToken['oauth_token'])) ?
+ $accessToken : $currentToken->getRawToken();
+
+ $this->authenticateUser($request, $currentUser, $service, $newToken, false);
}
if ($targetPath = $this->getTargetPath($session)) { | Check that the new access token have need keys
Check that the new access token have access_token or oauth_token keys | hwi_HWIOAuthBundle | train | php |
996407c9795986fd5365b1547c4224619f8789a5 | diff --git a/pyspider/run.py b/pyspider/run.py
index <HASH>..<HASH> 100755
--- a/pyspider/run.py
+++ b/pyspider/run.py
@@ -216,8 +216,7 @@ def scheduler(ctx, no_xmlrpc, xmlrpc_host, xmlrpc_port,
return scheduler
if not no_xmlrpc:
- # using run_in_thread here fails to complete and does not open the port
- utils.run_in_subprocess(scheduler.xmlrpc_run, port=xmlrpc_port, bind=xmlrpc_host)
+ utils.run_in_thread(scheduler.xmlrpc_run, port=xmlrpc_port, bind=xmlrpc_host)
scheduler.run()
@@ -267,8 +266,7 @@ def fetcher(ctx, no_xmlrpc, xmlrpc_host, xmlrpc_port, poolsize, proxy, user_agen
return fetcher
if not no_xmlrpc:
- # using run_in_thread here fails to complete and does not open the port
- utils.run_in_subprocess(fetcher.xmlrpc_run, port=xmlrpc_port, bind=xmlrpc_host)
+ utils.run_in_thread(fetcher.xmlrpc_run, port=xmlrpc_port, bind=xmlrpc_host)
fetcher.run() | using run_in_thread for scheduler and fetcher dispatch again | binux_pyspider | train | py |
4287b85898da2a8b32e09719f2e5430a62bab678 | diff --git a/ipyrad/assemble/cluster_across.py b/ipyrad/assemble/cluster_across.py
index <HASH>..<HASH> 100644
--- a/ipyrad/assemble/cluster_across.py
+++ b/ipyrad/assemble/cluster_across.py
@@ -322,9 +322,13 @@ def cluster(data, noreverse, ipyclient):
progressbar(100, done,
" clustering across | {}".format(elapsed))
else:
- ## if process is done, break.
+ ## if process is done, break. this is a backup catcher
if not proc.poll():
break
+ else:
+ print('in poll')
+ ## another catcher to let vsearch cleanup after clustering is done
+ proc.wait()
elapsed = datetime.timedelta(seconds=int(time.time()-start))
progressbar(100, 100, | added wait() to make sure vsearch is finished writing before moving on | dereneaton_ipyrad | train | py |
bc09c5f5de2103cefc33af1c5c505aa0587e10ee | diff --git a/fedmsg/consumers/tweet.py b/fedmsg/consumers/tweet.py
index <HASH>..<HASH> 100644
--- a/fedmsg/consumers/tweet.py
+++ b/fedmsg/consumers/tweet.py
@@ -83,7 +83,7 @@ class TweetBotConsumer(FedmsgConsumer):
if link:
try:
resp = requests.get('http://da.gd/s', params=dict(url=link))
- link = resp.text.split('>')[1].split('<')[0]
+ link = resp.text.strip()
except Exception:
self.log.warn("Bad URI for http://da.gd %r" % link)
link = "" | We no longer need to parse this. Thanks @CodeBlock! | fedora-infra_fedmsg | train | py |
fd59fa494546055e9cba8168e882d4513cc0f356 | diff --git a/spec/suites/spiderfySpec.js b/spec/suites/spiderfySpec.js
index <HASH>..<HASH> 100644
--- a/spec/suites/spiderfySpec.js
+++ b/spec/suites/spiderfySpec.js
@@ -294,4 +294,4 @@
map.remove();
document.body.removeChild(div);
-});
\ No newline at end of file
Added missing blank line at end of file in spiderfySpec | Leaflet_Leaflet.markercluster | train | js
492eb517b818444d1834c7f4306175d7663bfe87 | diff --git a/data/conjugations.js b/data/conjugations.js
index <HASH>..<HASH> 100644
--- a/data/conjugations.js
+++ b/data/conjugations.js
@@ -3,6 +3,7 @@ module.exports = {
'PerfectTense': 'have taken',
'PluPerfectTense': 'had taken',
'PastTense': 'took',
+ 'Participle': 'taken',
'FuturePerfect': 'will have taken'
},
'can': { | Add participle form of "take" | spencermountain_compromise | train | js |
4b47987ab4dd628bc30f6a811082d80335c593e3 | diff --git a/lib/zendesk_apps_support/product.rb b/lib/zendesk_apps_support/product.rb
index <HASH>..<HASH> 100644
--- a/lib/zendesk_apps_support/product.rb
+++ b/lib/zendesk_apps_support/product.rb
@@ -1,13 +1,13 @@
module ZendeskAppsSupport
module Product
- # The ids below match the values in the database, do not change them!
+ # The product code below match the values in the database, do not change them!
PRODUCTS_AVAILABLE = [
{
- id: 1,
+ code: 1,
name: 'support',
},
{
- id: 2,
+ code: 2,
name: 'zopim',
}
].freeze | fix(product): change id to code | zendesk_zendesk_apps_support | train | rb |
7775534a499d76c9e5d2ff3bd2b88d9c7f908c90 | diff --git a/test/AbstractFactory/ConfigAbstractFactoryTest.php b/test/AbstractFactory/ConfigAbstractFactoryTest.php
index <HASH>..<HASH> 100644
--- a/test/AbstractFactory/ConfigAbstractFactoryTest.php
+++ b/test/AbstractFactory/ConfigAbstractFactoryTest.php
@@ -26,7 +26,7 @@ class ConfigAbstractFactoryTest extends \PHPUnit_Framework_TestCase
$abstractFactory = new ConfigAbstractFactory();
$serviceManager = new ServiceManager();
- self::assertFalse($abstractFactory->canCreate($serviceManager, 'MarcoSucks'));
+ self::assertFalse($abstractFactory->canCreate($serviceManager, 'OcramiusSucks'));
}
public function testCanCreate() | Modified who sucks to avoid offending other Marcos | mxc-commons_mxc-servicemanager | train | php |
54666d778c628a8ec195067b6c39ca675aa70f5c | diff --git a/lettuce/bin.py b/lettuce/bin.py
index <HASH>..<HASH> 100755
--- a/lettuce/bin.py
+++ b/lettuce/bin.py
@@ -75,6 +75,10 @@ def main(args=sys.argv[1:]):
except ValueError:
pass
+ tags = None
+ if options.tags:
+ tags = [tag.strip('@') for tag in options.tags]
+
runner = lettuce.Runner(
base_path,
scenarios=options.scenarios,
@@ -82,7 +86,7 @@ def main(args=sys.argv[1:]):
random=options.random,
enable_xunit=options.enable_xunit,
xunit_filename=options.xunit_file,
- tags=[tag.strip('@') for tag in options.tags],
+ tags=tags,
)
result = runner.run() | Fixes issue where no tag was passed in CLI | aloetesting_aloe_django | train | py |
b7c790b21427ad419d8144d6c56b0655e2148adc | diff --git a/lib/rdkafka/producer.rb b/lib/rdkafka/producer.rb
index <HASH>..<HASH> 100644
--- a/lib/rdkafka/producer.rb
+++ b/lib/rdkafka/producer.rb
@@ -1,3 +1,5 @@
+require "securerandom"
+
module Rdkafka
# A producer for Kafka messages. To create a producer set up a {Config} and call {Config#producer producer} on that.
class Producer
@@ -9,11 +11,12 @@ module Rdkafka
# @private
def initialize(native_kafka)
+ @id = SecureRandom.uuid
@closing = false
@native_kafka = native_kafka
# Makes sure, that the producer gets closed before it gets GCed by Ruby
- ObjectSpace.define_finalizer(self, proc { close })
+ ObjectSpace.define_finalizer(@id, proc { close })
# Start thread to poll client for delivery callbacks
@polling_thread = Thread.new do
@@ -41,6 +44,8 @@ module Rdkafka
# Close this producer and wait for the internal poll queue to empty.
def close
+ ObjectSpace.undefine_finalizer(@id)
+
return unless @native_kafka
# Indicate to polling thread that we're closing | Use id instead of object to bind the the finalizer for a producer | appsignal_rdkafka-ruby | train | rb |
d1bbe8cf9458a1ca62f4f10502be05eb243affad | diff --git a/spec/google/api_client/simple_file_store_spec.rb b/spec/google/api_client/simple_file_store_spec.rb
index <HASH>..<HASH> 100644
--- a/spec/google/api_client/simple_file_store_spec.rb
+++ b/spec/google/api_client/simple_file_store_spec.rb
@@ -22,10 +22,6 @@ describe Google::APIClient::Service::SimpleFileStore do
FILE_NAME = 'test.cache'
- before(:all) do
- File.delete(FILE_NAME) if File.exists?(FILE_NAME)
- end
-
describe 'with no cache file' do
before(:each) do
File.delete(FILE_NAME) if File.exists?(FILE_NAME) | File is being deleted before each test anyway | googleapis_google-api-ruby-client | train | rb |
24400f7acaf439b0dd468b1e49147ba1d127ec8c | diff --git a/lib/hot-client.js b/lib/hot-client.js
index <HASH>..<HASH> 100755
--- a/lib/hot-client.js
+++ b/lib/hot-client.js
@@ -1,6 +1,6 @@
/* eslint-disable */
// remove trailing slash from webpack public path
-// see https://github.com/glenjamin/webpack-hot-middleware/issues/173
+// see https://github.com/glenjamin/webpack-hot-middleware/issues/154
const tmpPublicPath = __webpack_public_path__;
__webpack_public_path__ = __webpack_public_path__.replace(/\/$/, '');
const client = require('webpack-hot-middleware/client?dynamicPublicPath=true'); | Link to the correct issue on GH | DynamoMTL_shopify-pipeline | train | js |
865e38ee097adac9801f0633c8443dbf834252e5 | diff --git a/tests/TestCase.php b/tests/TestCase.php
index <HASH>..<HASH> 100644
--- a/tests/TestCase.php
+++ b/tests/TestCase.php
@@ -25,6 +25,8 @@ abstract class TestCase extends \PHPUnit_Framework_TestCase
const SEARCH_BUCKET_TYPE = 'yokozuna';
const SET_BUCKET_TYPE = 'sets';
+ const TEST_IMG = "Basho_Man_Super.png";
+
/**
* @var \Basho\Riak|null
*/
diff --git a/tests/scenario/EncodedDataTest.php b/tests/scenario/EncodedDataTest.php
index <HASH>..<HASH> 100644
--- a/tests/scenario/EncodedDataTest.php
+++ b/tests/scenario/EncodedDataTest.php
@@ -12,7 +12,6 @@ use Basho\Riak\Command;
*/
class EncodedDataTest extends TestCase
{
- const TEST_IMG = "Basho_Man_Super.png";
const BUCKET = "encodeddata";
const BASE64_KEY = "base64";
const BINARY_KEY = "binary.png"; | Move test image constant to TestCase.php so it can be used by other tests. | basho_riak-php-client | train | php,php |
269a88eeced5bde8cc06e0890302e8388eda39d5 | diff --git a/nationstates/objects.py b/nationstates/objects.py
index <HASH>..<HASH> 100644
--- a/nationstates/objects.py
+++ b/nationstates/objects.py
@@ -209,7 +209,7 @@ class Nationstates(NSPropertiesMixin, NSSettersMixin, RateLimit):
time.sleep(30)
if safe == "safe":
- vsafe = xrls(45)
+ vsafe = xrls(40)
resp = self._load(user_agent=user_agent, no_ratelimit=no_ratelimit,
within_time=30, amount_allow=vsafe)
self.api_mother.__xrls__ = int(self.data["request_instance"] | reverted safe back to <I> requests | DolphDev_pynationstates | train | py |
f0901c0c756472534365cfef748169b20db2db31 | diff --git a/lib/MyHomeKitTypes.js b/lib/MyHomeKitTypes.js
index <HASH>..<HASH> 100644
--- a/lib/MyHomeKitTypes.js
+++ b/lib/MyHomeKitTypes.js
@@ -250,9 +250,10 @@ class MyHomeKitTypes extends homebridgeLib.CustomHomeKitTypes {
// Used by homebridge-zp in Speaker service.
this.createCharacteristic('Balance', uuid('044'), {
format: this.formats.INT,
- minValue: -10,
- maxValue: 10,
- minStep: 1,
+ unit: this.units.PERCENTAGE,
+ minValue: -100,
+ maxValue: 100,
+ minStep: 5,
perms: [this.perms.READ, this.perms.NOTIFY, this.perms.WRITE]
}) | Update MyHomeKitTypes.js
Change my.Characteristic.Balance to percentage from -<I>% to <I>% in steps of 5%, in line with how Sonos exposes balance. | ebaauw_homebridge-lib | train | js |
d3cceb687806e30fabe01f394b4f221bd41ff981 | diff --git a/pycbc/inference/sampler/dynesty.py b/pycbc/inference/sampler/dynesty.py
index <HASH>..<HASH> 100644
--- a/pycbc/inference/sampler/dynesty.py
+++ b/pycbc/inference/sampler/dynesty.py
@@ -123,9 +123,21 @@ class DynestySampler(BaseSampler):
dlogz = float(cp.get(section, "dlogz"))
loglikelihood_function = \
get_optional_arg_from_config(cp, section, 'loglikelihood-function')
+
+ # optional arguments for dynesty
+ cargs = {'bound': str,
+ 'bootstrap': int,
+ 'enlarge': float,
+ 'update_interval': float,
+ 'sample': str}
+ extra = {}
+ for karg in cargs:
+ if cp.has_option(section, karg):
+ extra[karg] = cargs[karg](cp.get(section, karg))
+
obj = cls(model, nlive=nlive, dlogz=dlogz, nprocesses=nprocesses,
loglikelihood_function=loglikelihood_function,
- use_mpi=use_mpi)
+ use_mpi=use_mpi, **extra)
return obj
def checkpoint(self): | allow to pass initialization args to dynesty (#<I>)
* allow to pass initialization args to dynesty
* Update dynesty.py
* Update dynesty.py | gwastro_pycbc | train | py |
728bcf999e6a4bc2248b25b1ddba86292c595af8 | diff --git a/tools/queryLongLivedCerts.go b/tools/queryLongLivedCerts.go
index <HASH>..<HASH> 100644
--- a/tools/queryLongLivedCerts.go
+++ b/tools/queryLongLivedCerts.go
@@ -15,6 +15,7 @@ func main() {
es := elastigo.NewConn()
es.Domain = "localhost:9200"
step := 0
+ seen := make(map[string]bool)
for {
filter := fmt.Sprintf(`{"query":{"bool":{"must":[
{"match": {"validationInfo.Mozilla.isValid": "true"}},
@@ -28,7 +29,6 @@ func main() {
if len(res.Hits.Hits) == 0 {
break
}
- seen := make(map[string]bool)
thirtyNineMonths := time.Duration(28512 * time.Hour)
for _, storedCert := range res.Hits.Hits {
cert := new(certificate.Certificate) | discard dup certs in long lived cert querying tool | mozilla_tls-observatory | train | go |
ae2552d875aa7e203deea12b0162e3578c2c722f | diff --git a/webapp.js b/webapp.js
index <HASH>..<HASH> 100644
--- a/webapp.js
+++ b/webapp.js
@@ -88,9 +88,16 @@ module.exports = function(ctx, cb) {
repo.sauce_access_key = q['$set']['github_config.$.sauce_access_key'] = sauce_access_key
}
if (sauce_browsers) {
+ var invalid = false
try {
sauce_browsers = JSON.parse(sauce_browsers)
+ if (!Array.isArray(sauce_browsers)) {
+ invalid = true
+ }
} catch(e) {
+ invalid = true
+ }
+ if (invalid) {
return error("Error decoding `sauce_browsers` parameter - must be JSON-encoded array")
}
repo.sauce_browsers = q['$set']['github_config.$.sauce_browsers'] = sauce_browsers | ensure sauce_browsers is an array | Strider-CD_strider-sauce | train | js |
0a12778a8c4105a49084fe17833c25de12fdf9c0 | diff --git a/viper.go b/viper.go
index <HASH>..<HASH> 100644
--- a/viper.go
+++ b/viper.go
@@ -30,9 +30,9 @@ import (
"strings"
"time"
+ "github.com/jackspirou/cast"
"github.com/kr/pretty"
"github.com/mitchellh/mapstructure"
- "github.com/spf13/cast"
jww "github.com/spf13/jwalterweatherman"
"github.com/spf13/pflag"
) | using my own version of github.com/spf<I>/cast for now | spf13_viper | train | go |
823982ab5377c1ac405e844d2c3a5fd56f1d5f06 | diff --git a/safe/definitions/hazard.py b/safe/definitions/hazard.py
index <HASH>..<HASH> 100644
--- a/safe/definitions/hazard.py
+++ b/safe/definitions/hazard.py
@@ -355,7 +355,6 @@ hazard_tsunami = {
tsunami_hazard_population_classes,
tsunami_hazard_classes_ITB,
tsunami_hazard_population_classes_ITB,
- generic_hazard_classes,
],
'compulsory_fields': [hazard_value_field],
'fields': hazard_fields, | Remove generic classification from tsunami hazard fix #<I> | inasafe_inasafe | train | py |
289b9ea84386c9568a8db33390dc3d75b7fc2370 | diff --git a/spec/scheduler_spec.rb b/spec/scheduler_spec.rb
index <HASH>..<HASH> 100644
--- a/spec/scheduler_spec.rb
+++ b/spec/scheduler_spec.rb
@@ -188,7 +188,7 @@ describe Rufus::Scheduler do
@scheduler.in(t) { seen = true }
- sleep 0.1 while seen != true
+ wait_until { seen }
end
it 'accepts point in time and duration indifferently (#at)' do
@@ -199,7 +199,7 @@ describe Rufus::Scheduler do
@scheduler.at(t) { seen = true }
- sleep 0.1 while seen != true
+ wait_until { seen }
end
end
@@ -940,9 +940,7 @@ describe Rufus::Scheduler do
sleep 2
end
- sleep 0.4
-
- expect(@scheduler.jobs.size).to eq(1)
+ wait_until { @scheduler.jobs.size > 0 }
end
it 'does not return unscheduled jobs' do | Use wait_until {} in scheduler_spec.rb | jmettraux_rufus-scheduler | train | rb |
97c7b34283bf6d273ec92796a94237628b03977b | diff --git a/src/I18n/I18n.php b/src/I18n/I18n.php
index <HASH>..<HASH> 100644
--- a/src/I18n/I18n.php
+++ b/src/I18n/I18n.php
@@ -445,7 +445,11 @@ class i18n
// Load the binary to ensure projects always attempts
// to compile the same basic string.
- $activeTranslations = Translations::fromMoFile($moFile);
+ if ($locale->hasMoFile()) {
+ $activeTranslations = Translations::fromMoFile($moFile);
+ } else {
+ $activeTranslations = new Translations();
+ }
// Add local modifications to the default set should there
// be any.
@@ -453,7 +457,7 @@ class i18n
$this->hardTranslationSetMerge($locale, Translations::fromPoFile($envPoFile), $activeTranslations);
}
- $textDomain = Strata::app()->i18n->getTextdomain();
+ $textDomain = Strata::i18n()->getTextdomain();
$activeTranslations->setDomain($textDomain);
$activeTranslations->setHeader('Language', $locale->getCode());
$activeTranslations->setHeader('Text Domain', $textDomain); | fixed first scan issue where code attempted to load .mo file that did not exist | strata-mvc_strata | train | php |
5b1996a0c6960e03d632398ec63ab20ca77bc966 | diff --git a/src/CookieJar.php b/src/CookieJar.php
index <HASH>..<HASH> 100644
--- a/src/CookieJar.php
+++ b/src/CookieJar.php
@@ -55,9 +55,11 @@ class CookieJar
*/
public function clear($key)
{
- if (isset($this->cookies[$key])) {
- unset($this->cookies[$key]);
+ if (!array_key_exists($key, $this->cookies)) {
+ return;
}
+
+ unset($this->cookies[$key]);
}
public function clearAll() | Fix: Allow to clear cookie when the value is null | refinery29_piston | train | php |
6c0b9373ee7490ab0453e7f1fb65ea1743ec0a90 | diff --git a/lfs/transfer_queue.go b/lfs/transfer_queue.go
index <HASH>..<HASH> 100644
--- a/lfs/transfer_queue.go
+++ b/lfs/transfer_queue.go
@@ -434,8 +434,8 @@ func (q *TransferQueue) batchApiRoutine() {
if q.canRetryObject(t.Oid(), err) {
q.retry(t)
} else {
- q.wait.Done()
errOnce.Do(func() { q.errorc <- err })
+ q.wait.Done()
}
} | lfs/tq: send error before terminating object
Previously, decrementing the surrounding `sync.WaitGroup` allowed the
`q.errorc` channel to be closed before this send could occur. This racy
behavior introduced a send-on-closed-channel panic, which is fixed in this
commit. | git-lfs_git-lfs | train | go |
a898ed40cfd4f99e11d9026caf709dff3baf318a | diff --git a/kite-data/kite-data-core/src/main/java/org/kitesdk/data/DatasetDescriptor.java b/kite-data/kite-data-core/src/main/java/org/kitesdk/data/DatasetDescriptor.java
index <HASH>..<HASH> 100644
--- a/kite-data/kite-data-core/src/main/java/org/kitesdk/data/DatasetDescriptor.java
+++ b/kite-data/kite-data-core/src/main/java/org/kitesdk/data/DatasetDescriptor.java
@@ -956,7 +956,7 @@ public class DatasetDescriptor {
if (RESOURCE_URI_SCHEME.equals(location.getScheme())) {
return Resources.getResource(location.getRawSchemeSpecificPart()).openStream();
} else {
- Path filePath = new Path(location).makeQualified(defaultFS, new Path("/"));
+ Path filePath = new Path(qualifiedUri(location));
// even though it was qualified using the default FS, it may not be in it
FileSystem fs = filePath.getFileSystem(conf);
return fs.open(filePath); | CDK-<I>: Use the qualifiedUri method to properly check the scheme first.
Closes #<I> | kite-sdk_kite | train | java |
218ade9c2c37d373a7093e3892fa371ef1889d5a | diff --git a/shared/teams/add-people/index.js b/shared/teams/add-people/index.js
index <HASH>..<HASH> 100644
--- a/shared/teams/add-people/index.js
+++ b/shared/teams/add-people/index.js
@@ -95,10 +95,11 @@ const AddPeople = (props: Props) => (
{!Styles.isMobile && (
<Kb.Box style={{...Styles.globalStyles.flexBoxColumn, padding: Styles.globalMargins.medium}}>
<Kb.Box style={{...Styles.globalStyles.flexBoxRow, justifyContent: 'center'}}>
- <Kb.Button
+ <Kb.WaitingButton
disabled={!props.numberOfUsersSelected}
onClick={props.onOpenRolePicker}
label={props.addButtonLabel}
+ waitingKey={null}
type="Primary"
/>
</Kb.Box> | add waiting button to team add (#<I>) | keybase_client | train | js |
e44f18a412f98912d004becae346877b3a9d9e8c | diff --git a/ghost/admin/app/components/gh-members-recipient-select.js b/ghost/admin/app/components/gh-members-recipient-select.js
index <HASH>..<HASH> 100644
--- a/ghost/admin/app/components/gh-members-recipient-select.js
+++ b/ghost/admin/app/components/gh-members-recipient-select.js
@@ -157,10 +157,10 @@ export default class GhMembersRecipientSelect extends Component {
}
yield Promise.all([
- this.store.query('member', {filter: 'status:free', limit: 1}).then((res) => {
+ this.store.query('member', {filter: 'subscribed:true,status:free', limit: 1}).then((res) => {
this.freeMemberCount = res.meta.pagination.total;
}),
- this.store.query('member', {filter: 'status:-free', limit: 1}).then((res) => {
+ this.store.query('member', {filter: 'subscribed:true,status:-free', limit: 1}).then((res) => {
this.paidMemberCount = res.meta.pagination.total;
})
]); | 🐛 Fixed member count in publish menu not matching subscription status
no issue
- the `subscribed:true` filter was missed in the member count queries when we switched from `<GhMembersSegmentSelect>` to `<GhMembersRecipientSelect>` (<URL>) | TryGhost_Ghost | train | js |
41cbf6f92cf4ce8f6f428a237895176ca8e3bed2 | diff --git a/gbdxtools/s3.py b/gbdxtools/s3.py
index <HASH>..<HASH> 100644
--- a/gbdxtools/s3.py
+++ b/gbdxtools/s3.py
@@ -118,13 +118,12 @@ class S3(object):
def delete(self, location):
'''Delete content in bucket/prefix/location.
Location can be a directory or a file (e.g., my_dir or my_dir/my_image.tif)
- If location is a directory, all files in the directory are deleted.
- If it is a file, then that file is deleted.
+ Location is a wildcard match - 'image' will delete anything that matches "image*" including "image/foo/*"
+ This treats objects purely as a key/value store and does not respect directories.
Limited to deleting 1000 objects at a time.
Args:
- location (str): S3 location within prefix. Can be a directory or
- a file (e.g., my_dir or my_dir/my_image.tif).
+ location (str): S3 location within prefix
'''
bucket = self.info['bucket']
prefix = self.info['prefix'] | DOCS ONLY: clarify s3 delete behavior | DigitalGlobe_gbdxtools | train | py |
80f7ea69c6fdedfced0463b5bced964e3663ca5e | diff --git a/src/main/java/com/groupon/mesos/util/HttpProtocolReceiver.java b/src/main/java/com/groupon/mesos/util/HttpProtocolReceiver.java
index <HASH>..<HASH> 100644
--- a/src/main/java/com/groupon/mesos/util/HttpProtocolReceiver.java
+++ b/src/main/java/com/groupon/mesos/util/HttpProtocolReceiver.java
@@ -84,6 +84,8 @@ public class HttpProtocolReceiver
this.shutdownHandler = new GracefulShutdownHandler(pathHandler);
this.httpServer = Undertow.builder()
+ .setIoThreads(2)
+ .setWorkerThreads(16)
.addHttpListener(localAddress.getPort(), localAddress.getHost())
.setHandler(shutdownHandler)
.build(); | Adjust the number of io and worker threads for the receiver.
The defaults are based on cpu count and for large boxes (<I> cores),
one ends up with ~<I> io threads and <I> workers... | groupon_jesos | train | java |
f2578bb37c5a93c31a0a8aabd19786eab6d4d797 | diff --git a/association.py b/association.py
index <HASH>..<HASH> 100644
--- a/association.py
+++ b/association.py
@@ -118,20 +118,23 @@ class DiffieHelmanAssociator(object):
expiry = w3c2datetime(getResult('expiry'))
delta = now - issued
- replace_after = time.mktime(delta + replace_after)
- expiry = time.mktime(delta + expiry)
-
- secret = results.get('mac_key')
- if secret is None:
- # Regular DH response
+ replace_after = time.mktime((delta + replace_after).utctimetuple())
+ expiry = time.mktime((delta + expiry).utctimetuple())
+
+ session_type = results.get('session_type')
+ if session_type is None:
+ secret = getResult('mac_key')
+ else:
+ if session_type != 'DH-SHA1':
+ raise RuntimeError("Unknown Session Type: %r"
+ % (session_type,))
+
dh_server_pub = a2long(from_b64(getResult('dh_server_public')))
enc_mac_key = getResult('enc_mac_key')
dh_shared = pow(dh_server_pub, priv_key, p)
secret = strxor(from_b64(enc_mac_key), sha1(long2a(dh_shared)))
- # else: looks like the server wasn't up for DH ...
-
return Association(server_url, assoc_handle, secret,
expiry, replace_after) | [project @ fixed a few more datetime problems and looking at session_type rather than mac_key] | openid_python-openid | train | py |
15d545c1bab74afe61ee13f8d29025c110e3fad6 | diff --git a/examples/basic/example8.rb b/examples/basic/example8.rb
index <HASH>..<HASH> 100644
--- a/examples/basic/example8.rb
+++ b/examples/basic/example8.rb
@@ -12,17 +12,13 @@ class X
end
end
-s = Sandbox.new
-priv = Privileges.new
-priv.allow_method :print
-
-s.run( "
+Sandbox.new.run( "
class ::X
def foo
print \"foo defined inside the sandbox\\n\"
end
end
- ", priv, :base_namespace => SandboxModule)
+ ", Privileges.allow_method(:print), :base_namespace => SandboxModule)
x = X.new # X class is not affected by the sandbox (The X Class defined in the sandbox is SandboxModule::X) | changed example 8 in order to user sugar syntax | tario_shikashi | train | rb |
a5b8da45a90f03eca1643bd92e04b2a397885c22 | diff --git a/core/__init__.py b/core/__init__.py
index <HASH>..<HASH> 100644
--- a/core/__init__.py
+++ b/core/__init__.py
@@ -20,7 +20,7 @@ except Exception:
__all__ = ['CSVModel',
'DualQuaternion',
'TerminateException',
- 'ExperimentLogger', 'EvaluateGraspsExperimentLogger',
+ 'ExperimentLogger',
'dump', 'load',
'BagOfPoints', 'BagOfVectors', 'Point', 'Direction', 'Plane3D',
'PointCloud', 'NormalCloud', 'ImageCoords', 'RgbCloud', 'RgbPointCloud', 'PointNormalCloud', | removed legacy grasp experiment logger from import | BerkeleyAutomation_autolab_core | train | py |
0ccce8f1b20b96cfb56b338133727b7166585224 | diff --git a/lib/extract/css/collectInfo.js b/lib/extract/css/collectInfo.js
index <HASH>..<HASH> 100644
--- a/lib/extract/css/collectInfo.js
+++ b/lib/extract/css/collectInfo.js
@@ -228,7 +228,7 @@ var atTmpl = require('basisjs-tools-ast').tmpl;
if (file.removals)
{
var removals = file.removals.map(function(fragment){
- return fragment.token;
+ return fragment.node || fragment.token; // new delaration uses `node` but previously `token` was using
});
processTemplateAst(file, removals, idMapRemovals, classMapRemovals);
} | extract: support for new format of removals item in template declaration | basisjs_basisjs-tools-build | train | js |
aa24e44a5306b3695f2993f99874ebbd31d2990c | diff --git a/lib/octodown/version.rb b/lib/octodown/version.rb
index <HASH>..<HASH> 100644
--- a/lib/octodown/version.rb
+++ b/lib/octodown/version.rb
@@ -1,3 +1,3 @@
module Octodown
- VERSION = '1.3.0'
+ VERSION = '1.4.0'.freeze
end | Bump to version <I> :confetti_ball: | ianks_octodown | train | rb |