| id | content |
|---|---|
codereview_java_data_12446 | }
}
- @Override
- public int getCount() {
- return knownSessions.size();
- }
-
@Override
public void remove(SessionId id) {
Require.nonNull("Session ID", id);
This method isn't needed (see above)
}
}
@Override
public void remove(SessionId id) {
Require.nonNull("Session ID", id); |
codereview_java_data_12448 | private final int pc;
private final String[] stack;
private final Object storage;
- private final BytesValue reason;
public StructLog(final TraceFrame traceFrame) {
depth = traceFrame.getDepth() + 1;
I think this should be a String here, but calculated from the `BytesValue` with `BytesValue.toUnprefixedString`. That would then match how the stack and return values are all formatted.
private final int pc;
private final String[] stack;
private final Object storage;
+ private final String reason;
public StructLog(final TraceFrame traceFrame) {
depth = traceFrame.getDepth() + 1; |
codereview_java_data_12453 | try (Writer w = Util.printWriter(logFile)) {
write(doc, w, indent);
}
- System.out.println("write; modCount=" + modCount);
} catch (IOException e) {
throw Util.throwAsRuntime("error while writing test reference log '"
+ logFile + "'", e);
Please remove the `System.out` call before merging
try (Writer w = Util.printWriter(logFile)) {
write(doc, w, indent);
}
} catch (IOException e) {
throw Util.throwAsRuntime("error while writing test reference log '"
+ logFile + "'", e); |
codereview_java_data_12459 | * Key: opId, value: [ mapId, key, oldValue ].
*/
@SuppressWarnings("unchecked")
- final MVMap<Long,Record<?,?>>[] undoLogs = (MVMap<Long,Record<?,?>>[])new MVMap[MAX_OPEN_TRANSACTIONS];
private final MVMap.Builder<Long, Record<?,?>> undoLogBuilder;
private final DataType<?> dataType;
Cast can be removed (annotation is still needed).
* Key: opId, value: [ mapId, key, oldValue ].
*/
@SuppressWarnings("unchecked")
+ final MVMap<Long,Record<?,?>>[] undoLogs = new MVMap[MAX_OPEN_TRANSACTIONS];
private final MVMap.Builder<Long, Record<?,?>> undoLogBuilder;
private final DataType<?> dataType; |
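Editorial note on row 12459: the reviewer points out that an explicit cast on a generic array allocation is redundant while `@SuppressWarnings("unchecked")` is still required. A minimal standalone sketch of that idiom (the class and method names here are illustrative, not from the reviewed H2 code):

```java
import java.util.ArrayList;
import java.util.List;

public class GenericArrayDemo {
    // Java forbids `new List<T>[n]`, so the idiom is a raw-type allocation.
    // The unchecked warning comes from the raw-to-parameterized conversion;
    // an explicit cast adds nothing, which is why the reviewer asked to drop it.
    @SuppressWarnings("unchecked")
    static <T> List<T>[] newListArray(int size) {
        return new List[size]; // raw allocation, implicit unchecked conversion
    }

    public static void main(String[] args) {
        List<String>[] slots = newListArray(2);
        slots[0] = new ArrayList<>();
        slots[0].add("undo-log");
        System.out.println(slots[0].get(0));
    }
}
```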
codereview_java_data_12462 | if (!this.runningFlags.isWriteable()) {
long value = this.printTimes.getAndIncrement();
if ((value % 50000) == 0) {
- log.warn("message store could not write (may be disk full) so putMessage is forbidden");
}
return PutMessageStatus.SERVICE_NOT_AVAILABLE;
} else {
IMO, "may be" is not strict enough advice for the customer.
if (!this.runningFlags.isWriteable()) {
long value = this.printTimes.getAndIncrement();
if ((value % 50000) == 0) {
+ log.warn("the message store is not writable. It may be caused by one of the following reasons: " +
+ "the broker's disk is full, write to logic queue error, write to index file error, etc");
}
return PutMessageStatus.SERVICE_NOT_AVAILABLE;
} else { |
codereview_java_data_12464 | TreeSet<KeyExtent> tablets = tables.get(tableId);
if (tablets == null) {
- for (Entry<TableId,TreeSet<KeyExtent>> e : tables.entrySet()) {
- TableId key = e.getKey();
- TreeSet<KeyExtent> value = e.getValue();
- String tableName = Tables.getTableName(opts.getServerContext(), key);
- checkTable(key, value, tableName);
- }
tables.clear();
I think this line could have stayed the same and the name lookup could have occurred in the method. Also, need to handle the case that looking up the name fails, in case the table is deleted, but there is still metadata left in the metadata tablet. That wasn't an issue before because we only used the table ID before.
TreeSet<KeyExtent> tablets = tables.get(tableId);
if (tablets == null) {
+ tables.forEach(CheckForMetadataProblems::checkTable);
tables.clear(); |
codereview_java_data_12465 | }
count++;
- if (hash.equals(Hash.EMPTY)) {
- // No need to go to the archive for an empty value
- nodeData.add(BytesValue.EMPTY);
- } else {
- worldStateArchive.getNodeData(hash).ifPresent(nodeData::add);
- }
}
return NodeDataMessage.create(nodeData);
}
This check would probably be better inside `WorldStateArchive` so any future callers automatically benefit. Sorry, I should have done that in the first place...
}
count++;
+ worldStateArchive.getNodeData(hash).ifPresent(nodeData::add);
}
return NodeDataMessage.create(nodeData);
} |
codereview_java_data_12470 | assertNoProblems(result);
}
@Test
void noModifiersInForEachBesideFinal() {
ParseResult<Statement> result = javaParser.parse(STATEMENT, provider("for(static transient int i : nums){}"));
Hmmm, three messages are a little overkill?
assertNoProblems(result);
}
+ @Test
+ void noMultipleVariablesInForEach() {
+ ParseResult<Statement> result = javaParser.parse(STATEMENT, provider("for(int i, j : nums){}"));
+ assertProblems(result,
+ "(line 1,col 1) A foreach statement's variable declaration must have exactly one variable declarator. Given: 2.");
+ }
+
@Test
void noModifiersInForEachBesideFinal() {
ParseResult<Statement> result = javaParser.parse(STATEMENT, provider("for(static transient int i : nums){}")); |
codereview_java_data_12474 | for (int i=0;i<itemcount;i++){
h.clearAlbumExclude(excludedFolders.remove(0).getAbsolutePath());
}
- Toast.makeText(ExcludedAlbumsActivity.this,"All folders are restored!",Toast.LENGTH_SHORT).show();
- finish();
}
}
Replace this toast with a snackbar.
for (int i=0;i<itemcount;i++){
h.clearAlbumExclude(excludedFolders.remove(0).getAbsolutePath());
}
+ SnackBarHandler.show(findViewById(R.id.rl_ea), "All folders are restored", Snackbar.LENGTH_SHORT).show();
+ new Handler().postDelayed(new Runnable() {
+ @Override
+ public void run() {
+ finish();
+ }
+ },1000);
}
} |
codereview_java_data_12478 | ImmutableSet.of("bucketing_version", StatsSetupConst.ROW_COUNT,
StatsSetupConst.RAW_DATA_SIZE, StatsSetupConst.TOTAL_SIZE, StatsSetupConst.NUM_FILES);
- private static final int METASTORE_POOL_SIZE = 15;
-
- // before variables
- protected static TestHiveMetastore metastore;
private static TestHiveShell shell;
private TestTables testTables;
Wouldn't it make sense to embed the `TestHiveMetastore` in `TestHiveShell` so that it is always available? Or is there a case where we use `TestHiveShell` without `TestHiveMetastore`?
ImmutableSet.of("bucketing_version", StatsSetupConst.ROW_COUNT,
StatsSetupConst.RAW_DATA_SIZE, StatsSetupConst.TOTAL_SIZE, StatsSetupConst.NUM_FILES);
private static TestHiveShell shell;
private TestTables testTables; |
codereview_java_data_12479 | }
} finally {
synchronized (this) {
writesInProgress--;
if (writesInProgress == 0)
this.notifyAll();
I think these checks should be left as-is. It may not be possible for the current code to result in this condition, but changes in the code could cause the assumptions to be violated. If that happens it would be nice to have the checks.
}
} finally {
synchronized (this) {
+ if (writesInProgress < 1)
+ throw new IllegalStateException("writesInProgress < 1 " + writesInProgress);
+
writesInProgress--;
if (writesInProgress == 0)
this.notifyAll(); |
codereview_java_data_12483 | if (privacy.isDisplayMessage()) {
setTicker(getStyledMessage(recipient, message));
} else if (privacy.isDisplayContact()) {
- setTicker(getStyledMessage(recipient, context.getString(R.string.SingleRecipientNotificationBuilder_signal)));
} else {
- setTicker(context.getString(R.string.SingleRecipientNotificationBuilder_signal));
}
}
shouldn't these be "new message"?
if (privacy.isDisplayMessage()) {
setTicker(getStyledMessage(recipient, message));
} else if (privacy.isDisplayContact()) {
+ setTicker(getStyledMessage(recipient, context.getString(R.string.SingleRecipientNotificationBuilder_new_message)));
} else {
+ setTicker(context.getString(R.string.SingleRecipientNotificationBuilder_new_message));
}
} |
codereview_java_data_12484 | this.running = true;
} catch (LDAPException ex) {
throw new RuntimeException("Server startup failed", ex);
}
}
With these changes, there is still no exception thrown when the file does not exist. We need to check if the LDIF exists and throw an exception if it does not. We can add the check for `isFile()` and `isReadable()` after we verify it exists. ```suggestion if (resources.length > 0) { if (!resources[0].exists()) { throw new IllegalArgumentException("Could not find LDIF " + this.ldif); } ```
this.running = true;
} catch (LDAPException ex) {
throw new RuntimeException("Server startup failed", ex);
+ } catch (IllegalArgumentException ex){
+ throw ex;
}
} |
codereview_java_data_12485 | return rootReference;
}
- private Page replacePage(CursorPos path, Page replacement, IntValueHolder unsavedMemoryHolder) {
int unsavedMemory = replacement.getMemory();
while (path != null) {
Page child = replacement;
This method can be static.
return rootReference;
}
+ private static Page replacePage(CursorPos path, Page replacement, IntValueHolder unsavedMemoryHolder) {
int unsavedMemory = replacement.getMemory();
while (path != null) {
Page child = replacement; |
codereview_java_data_12489 | calculateIncomeAndExpenseBooking = new CalculateIncomeAndExpenseBookingImpl(null, null, null, null, incomeAndExpenseReadPlatformService,officeReadPlatformService);
}
- @After
- public void tearDown() {
-
- }
/*
Case 1: All running balances has to be calculated before booking off income and expense account
If not running balances, then throw exception
remove this empty method: ```suggestion ```
calculateIncomeAndExpenseBooking = new CalculateIncomeAndExpenseBookingImpl(null, null, null, null, incomeAndExpenseReadPlatformService,officeReadPlatformService);
}
/*
Case 1: All running balances has to be calculated before booking off income and expense account
If not running balances, then throw exception |
codereview_java_data_12491 | finish();
}
- if (threadId > -1) {
- getSupportLoaderManager().initLoader(0,null,MediaPreviewActivity.this);
- } else {
- initializeViewPagerAdapter();
- }
}
private void initializeViewPager() {
this does io on the ui thread
finish();
}
+ getSupportLoaderManager().initLoader(0,null,MediaPreviewActivity.this);
}
private void initializeViewPager() { |
codereview_java_data_12495 | */
package org.h2.value;
import org.h2.engine.Constants;
import org.h2.util.MathUtils;
/**
- * Base class for collection values.
*/
abstract class ValueCollectionBase extends Value {
Base class for ROW and ARRAY values
*/
package org.h2.value;
+import org.h2.api.ErrorCode;
import org.h2.engine.Constants;
+import org.h2.engine.Mode;
+import org.h2.message.DbException;
import org.h2.util.MathUtils;
/**
+ * Base class for ARRAY and ROW values.
*/
abstract class ValueCollectionBase extends Value { |
codereview_java_data_12501 | package software.amazon.awssdk.util;
import static java.time.ZoneOffset.UTC;
-import static java.time.format.DateTimeFormatter.ISO_DATE_TIME;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import static software.amazon.awssdk.util.DateUtils.ALTERNATE_ISO_8601_DATE_FORMAT;
import com.fasterxml.jackson.core.JsonFactory;
import java.nio.charset.Charset;
import java.text.ParseException;
import java.text.SimpleDateFormat;
These tests are still reimplementing most of `DateUtils`. We should just have some reference constants that we test against, like this: ```java private static final Instant TEST_INSTANT = Instant.ofEpochMilli(12345L); // The above instant formatted as an ISO 8601 Date private static final Instant FORMATTED_ISO_8601 = "...."; @Test public void formatIso8601Date() { assertThat(DateUtils.parseIso8601Date(FORMATTED_ISO_8601)).isEqualTo(TEST_INSTANT); } ```
package software.amazon.awssdk.util;
import static java.time.ZoneOffset.UTC;
+import static java.time.format.DateTimeFormatter.ISO_INSTANT;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import static software.amazon.awssdk.util.DateUtils.ALTERNATE_ISO_8601_DATE_FORMAT;
import com.fasterxml.jackson.core.JsonFactory;
+
import java.nio.charset.Charset;
import java.text.ParseException;
import java.text.SimpleDateFormat; |
codereview_java_data_12503 | }
logger.info(jobs.size() + " jobs are running, adding a member");
createJetMember(config);
for (Job job : jobs) {
assertJobStatusEventually(job, RUNNING, 30);
}
If I understand the test correctly, it sets a delay of 1 second. That means removing this `sleepSeconds(2);` would basically check that the jobs are still running before the cluster is scaled up. I think the original test tried to check that they are running after scaling up.
}
logger.info(jobs.size() + " jobs are running, adding a member");
createJetMember(config);
+ sleepSeconds(2);
for (Job job : jobs) {
assertJobStatusEventually(job, RUNNING, 30);
} |
codereview_java_data_12505 | doEnqueue(awaitableCallback);
if (!await(awaitableCallback.countDown)) throw new InterruptedIOException();
- Throwable t = awaitableCallback.throwable.get();
if (t != null) {
if (t instanceof Error) throw (Error) t;
if (t instanceof IOException) throw (IOException) t;
It took me a while to follow how the throttling works - to confirm, is it because the callbacks are run on a fixed number of threads? And the reason to use an `Executor` is to have both a limit on concurrent calls and control over the size of the queued-task backlog, whereas a semaphore like the old ES implementation only limits concurrency? I think this makes sense, just confirming my understanding.
doEnqueue(awaitableCallback);
if (!await(awaitableCallback.countDown)) throw new InterruptedIOException();
+ Throwable t = awaitableCallback.throwable;
if (t != null) {
if (t instanceof Error) throw (Error) t;
if (t instanceof IOException) throw (IOException) t; |
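Editorial note on row 12505: the reviewer's understanding is that an `Executor` bounds both in-flight calls and the queued backlog, where a semaphore bounds only concurrency. A hedged sketch of that idea using only `java.util.concurrent` (the pool sizes and rejection policy are my own assumptions, not the reviewed client's configuration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedExecutorDemo {
    // A semaphore only caps concurrent calls; a ThreadPoolExecutor with a
    // bounded queue additionally caps how much work may pile up waiting.
    static int runAll(int tasks) throws InterruptedException {
        ExecutorService pool = new ThreadPoolExecutor(
                2, 2,                        // at most 2 calls in flight
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4), // at most 4 tasks queued
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs inline
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(done::incrementAndGet); // return value discarded
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAll(10));
    }
}
```

With `CallerRunsPolicy`, work submitted beyond the queue bound executes on the submitting thread, which naturally throttles the producer instead of dropping tasks.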
codereview_java_data_12510 | }
/**
- * Returns true if user is logged
*
- * @return true if user is logged; false otherwise
*/
public boolean isLogged() {
return authentication.getCurrentUser() != null;
Should this be "logged in"?
}
/**
+ * Returns true if user is logged in
*
+ * @return true if user is logged in; false otherwise
*/
public boolean isLogged() {
return authentication.getCurrentUser() != null; |
codereview_java_data_12518 | payload.get(payloadBytes);
}
Long timestamp = null;
- timestamp = messageAndOffset.message().timestamp();
-
return new Message(topicPartition.getTopic(), topicPartition.getPartition(),
messageAndOffset.offset(), keyBytes, payloadBytes, timestamp);
}
There will be a backward-compatibility issue here, because either: 1. people still use the Kafka 8 library, or 2. people are on Kafka 10 but don't want to use the timestamp from the Kafka message (they still want to retrieve the timestamp from other fields). I think we need to introduce a config param, something like useKafkaTimestamp, and it should be false by default.
payload.get(payloadBytes);
}
Long timestamp = null;
+ if (mConfig.getUseKafkaTimestamp()) {
+ timestamp = messageAndOffset.message().timestamp();
+ }
return new Message(topicPartition.getTopic(), topicPartition.getPartition(),
messageAndOffset.offset(), keyBytes, payloadBytes, timestamp);
} |
codereview_java_data_12539 | JavaTypeDefinition previousType;
if (node.getType() != null) { // static field or method
previousType = JavaTypeDefinition.forClass(node.getType());
} else { // non-static field or method
if (dotSplitImage.length == 1 && astArguments != null) { // method
I'm lost here... who is setting this on a static access? I don't see it being set in this method up to this point, and `ASTName` has no children....
JavaTypeDefinition previousType;
if (node.getType() != null) { // static field or method
+ // node.getType() has been set by the call to searchNodeNameForClass above
+ // node.getType() will have the value equal to the Class found by that method
previousType = JavaTypeDefinition.forClass(node.getType());
} else { // non-static field or method
if (dotSplitImage.length == 1 && astArguments != null) { // method |
codereview_java_data_12545 | Map<String,String> overrides =
CompactableUtils.getOverrides(job.getKind(), tablet, cInfo.localHelper, job.getFiles());
- String tmpFileName =
- tablet.getNextMapFilename(!cInfo.propagateDeletes ? "A" : "C").getMetaInsert() + "_tmp";
- TabletFile compactTmpName = new TabletFile(new Path(tmpFileName));
ExternalCompactionInfo ecInfo = new ExternalCompactionInfo();
Could have a little static function in CompactableUtils that does this, then call that function here and in CompactableUtils. That makes it easier for anyone looking at the code to find the two places where this happens.
Map<String,String> overrides =
CompactableUtils.getOverrides(job.getKind(), tablet, cInfo.localHelper, job.getFiles());
+ TabletFile compactTmpName = tablet.getNextMapFilenameForMajc(cInfo.propagateDeletes);
ExternalCompactionInfo ecInfo = new ExternalCompactionInfo(); |
codereview_java_data_12557 | import java.util.Collection;
import java.util.List;
import java.util.Optional;
-import org.junit.AssumptionViolatedException;
import org.junit.jupiter.api.extension.AfterAllCallback;
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.testcontainers.containers.GenericContainer;
most of the simplification here is dropping support of local connections (docker only now)
import java.util.Collection;
import java.util.List;
import java.util.Optional;
import org.junit.jupiter.api.extension.AfterAllCallback;
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
+import org.opentest4j.TestAbortedException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.testcontainers.containers.GenericContainer; |
codereview_java_data_12558 | this.brokerController.getConfiguration().update(properties);
if (properties.containsKey("brokerPermission")) {
this.brokerController.getTopicConfigManager().getDataVersion().nextVersion();
- this.brokerController.registerBrokerAll(false, false,true);
}
} else {
log.error("string2Properties error");
The same issue.
this.brokerController.getConfiguration().update(properties);
if (properties.containsKey("brokerPermission")) {
this.brokerController.getTopicConfigManager().getDataVersion().nextVersion();
+ this.brokerController.registerBrokerAll(false, false, true);
}
} else {
log.error("string2Properties error"); |
codereview_java_data_12562 | private static final String TAG = "PodDBAdapter";
public static final String DATABASE_NAME = "Antennapod.db";
- public static final int VERSION = 1091000;
/**
* Maximum number of arguments for IN-operator.
Please only increment by 1.
private static final String TAG = "PodDBAdapter";
public static final String DATABASE_NAME = "Antennapod.db";
+ public static final int VERSION = 1090001;
/**
* Maximum number of arguments for IN-operator. |
codereview_java_data_12578 | @Override
public Options forTablets(Collection<KeyExtent> extents) {
- if (!extents.stream().map(e -> DataLevel.of(e.tableId()))
- .allMatch(dl -> dl == DataLevel.USER)) {
throw new IllegalArgumentException(
"readTablets only supported for user tablets at this time.");
}
Instead of `! allMatch`, you can just use `anyMatch`: ```suggestion if (extents.stream().map(e -> DataLevel.of(e.tableId())) .anyMatch(dl -> dl == DataLevel.USER)) { ``` You can also split up the two steps in the map function to use method references. Sometimes this is easier to understand: ```suggestion if (extents.stream().map(KeyExtent::tableId).map(DataLevel::of) .anyMatch(dl -> dl == DataLevel.USER)) { ```
@Override
public Options forTablets(Collection<KeyExtent> extents) {
+ if (extents.stream().map(KeyExtent::tableId).map(DataLevel::of)
+ .anyMatch(dl -> dl != DataLevel.USER)) {
throw new IllegalArgumentException(
"readTablets only supported for user tablets at this time.");
} |
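Editorial note on row 12578: the suggested rewrite turns `!stream.allMatch(dl -> dl == USER)` into `stream.anyMatch(dl -> dl != USER)`; the two forms are equivalent by De Morgan's law. A small self-contained check (the enum here is a stand-in, not Accumulo's `DataLevel`):

```java
import java.util.List;

public class MatchEquivalenceDemo {
    enum Level { USER, METADATA, ROOT }

    // De Morgan: not(all are USER)  ==  some element is not USER
    static boolean hasNonUser(List<Level> levels) {
        return levels.stream().anyMatch(l -> l != Level.USER);
    }

    static boolean notAllUser(List<Level> levels) {
        return !levels.stream().allMatch(l -> l == Level.USER);
    }

    public static void main(String[] args) {
        List<Level> mixed = List.of(Level.USER, Level.METADATA);
        List<Level> onlyUser = List.of(Level.USER, Level.USER);
        System.out.println(hasNonUser(mixed) == notAllUser(mixed));       // true
        System.out.println(hasNonUser(onlyUser) == notAllUser(onlyUser)); // true
    }
}
```

Both `allMatch` and `anyMatch` short-circuit, so the rewrite changes readability only, not cost.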
codereview_java_data_12592 | return false;
}
MessageReference other = (MessageReference) o;
- return equals(other.accountUuid, other.folderName, other.uid, other.flag);
}
- @SuppressWarnings("UnusedParameters") // consistency with constructor
- public boolean equals(String accountUuid, String folderName, String uid, Flag flag) {
// noinspection StringEquality, we check for null values here
return ((accountUuid == this.accountUuid || (accountUuid != null && accountUuid.equals(this.accountUuid)))
&& (folderName == this.folderName || (folderName != null && folderName.equals(this.folderName)))
I don't see why we'd need an argument list that matches that of the constructor.
return false;
}
MessageReference other = (MessageReference) o;
+ return equals(other.accountUuid, other.folderName, other.uid);
}
+ public boolean equals(String accountUuid, String folderName, String uid) {
// noinspection StringEquality, we check for null values here
return ((accountUuid == this.accountUuid || (accountUuid != null && accountUuid.equals(this.accountUuid)))
&& (folderName == this.folderName || (folderName != null && folderName.equals(this.folderName))) |
codereview_java_data_12598 | long minLogicOffset = logicQueue.getMinLogicOffset();
SelectMappedBufferResult result = logicQueue.getIndexBuffer(minLogicOffset / ConsumeQueue.CQ_STORE_UNIT_SIZE);
- Long storeTime = getStoreTime(result);
- return storeTime;
}
return -1;
`return getStoreTime(result);` will suffice.
long minLogicOffset = logicQueue.getMinLogicOffset();
SelectMappedBufferResult result = logicQueue.getIndexBuffer(minLogicOffset / ConsumeQueue.CQ_STORE_UNIT_SIZE);
+ return getStoreTime(result);
}
return -1; |
codereview_java_data_12608 | outputStream = fs.create(new Path(file), false, bufferSize, (short) rep, block);
}
- BCFile.Writer _cbw = new BCFile.Writer(
- new RateLimitedOutputStream(outputStream, options.getRateLimiter()), compression, conf,
- false, acuconf);
return new RFile.Writer(_cbw, (int) blockSize, (int) indexBlockSize, samplerConfig, sampler);
}
What does this false parameter accomplish? Can we do it with an enum instead of a boolean?
outputStream = fs.create(new Path(file), false, bufferSize, (short) rep, block);
}
+ BCFile.Writer _cbw = new BCFile.Writer(outputStream, options.getRateLimiter(), compression,
+ conf, acuconf);
return new RFile.Writer(_cbw, (int) blockSize, (int) indexBlockSize, samplerConfig, sampler);
} |
codereview_java_data_12611 | return R.string.location_inside;
case OUTSIDE:
return R.string.location_outside;
default:
return R.string.unknown;
}
only 2 are implemented now?
return R.string.location_inside;
case OUTSIDE:
return R.string.location_outside;
+ case GOING_IN:
+ return R.string.location_going_in;
+ case GOING_OUT:
+ return R.string.location_going_out;
default:
return R.string.unknown;
} |
codereview_java_data_12620 | private final ActionButtonCallback actionButtonCallback;
private final ActionButtonUtils actionButtonUtils;
private final boolean showOnlyNewEpisodes;
- private final MainActivity mainActivity;
public AllEpisodesRecycleAdapter(Context context,
MainActivity mainActivity,
Should not really cause an activity leak (as long as we set the adapter in the activity to null), but I could sleep better if this was a WeakReference
private final ActionButtonCallback actionButtonCallback;
private final ActionButtonUtils actionButtonUtils;
private final boolean showOnlyNewEpisodes;
+ private final WeakReference<MainActivity> mainActivityRef;
public AllEpisodesRecycleAdapter(Context context,
MainActivity mainActivity, |
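Editorial note on row 12620: the fix wraps the `MainActivity` in a `WeakReference` so the adapter cannot keep the activity alive. A plain-Java sketch of the pattern (no Android dependency; the `Holder` class is illustrative):

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    // Holding the target behind a WeakReference lets the GC reclaim it even
    // while this holder is still alive; callers must null-check target().
    static class Holder {
        private final WeakReference<Object> ref;
        Holder(Object target) { this.ref = new WeakReference<>(target); }
        Object target() { return ref.get(); } // null once the target is collected
    }

    public static void main(String[] args) {
        Object activity = new Object(); // stand-in for an Activity
        Holder holder = new Holder(activity);
        // Non-null while a strong reference (the local variable) still exists.
        System.out.println(holder.target() == activity);
    }
}
```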
codereview_java_data_12621 | .filter(l -> {
boolean pass = l % 2 == 0;
if (!pass) {
- UserMetrics.getCounter("dropped").inc();
}
- UserMetrics.getCounter("total").inc();
return pass;
})
.drainTo(sink);
this won't work because of lazySet
.filter(l -> {
boolean pass = l % 2 == 0;
if (!pass) {
+ Metrics.metric("dropped").inc();
}
+ Metrics.metric("total").inc();
return pass;
})
.drainTo(sink); |
codereview_java_data_12628 | import com.palantir.baseline.tasks.CheckExactDependenciesTask;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.plugins.JavaPlugin;
import org.gradle.api.plugins.JavaPluginConvention;
import org.gradle.api.tasks.SourceSet;
does this need to be configurable?
import com.palantir.baseline.tasks.CheckExactDependenciesTask;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
+import org.gradle.api.artifacts.Configuration;
import org.gradle.api.plugins.JavaPlugin;
import org.gradle.api.plugins.JavaPluginConvention;
import org.gradle.api.tasks.SourceSet; |
codereview_java_data_12650 | return rules
.stream()
.filter(filter)
- .allMatch(
- rule -> {
- final boolean result = rule.validate(header, parent, protocolContext);
- return result;
- });
}
private Optional<BlockHeader> getParent(
The `result` variable appears to be redundant here.
return rules
.stream()
.filter(filter)
+ .allMatch(rule -> rule.validate(header, parent, protocolContext));
}
private Optional<BlockHeader> getParent( |
codereview_java_data_12655 | final WebSocketConfiguration webSocketConfiguration,
final MetricsConfiguration metricsConfiguration,
final Optional<PermissioningConfiguration> permissioningConfiguration,
- final Collection<EnodeURL> staticNodes)
- throws IOException {
checkNotNull(runnerBuilder);
I can't immediately see why this would now throw IOException. I assume there's a chain of calls it now flows up that's not obvious in GitHub's UI but could you double check it is required with the final code?
final WebSocketConfiguration webSocketConfiguration,
final MetricsConfiguration metricsConfiguration,
final Optional<PermissioningConfiguration> permissioningConfiguration,
+ final Collection<EnodeURL> staticNodes) {
checkNotNull(runnerBuilder); |
codereview_java_data_12656 | job.suspend();
assertJobStatusEventually(job, SUSPENDED);
assertThat(job.getSuspensionCause()).matches(JobSuspensionCause::requestedByUser);
-
assertThatThrownBy(job.getSuspensionCause()::errorCause)
- .isInstanceOf(UnsupportedOperationException.class)
.hasMessage("Suspension not caused by an error");
cancelAndJoin(job);
You can keep this assert against the `description` method.
job.suspend();
assertJobStatusEventually(job, SUSPENDED);
assertThat(job.getSuspensionCause()).matches(JobSuspensionCause::requestedByUser);
+ assertThat(job.getSuspensionCause().description()).isEqualTo("Requested by user");
assertThatThrownBy(job.getSuspensionCause()::errorCause)
+ .isInstanceOf(IllegalStateException.class)
.hasMessage("Suspension not caused by an error");
cancelAndJoin(job); |
codereview_java_data_12676 | if (header.getNumber() == BlockHeader.GENESIS_BLOCK_NUMBER) {
continue;
}
- if (header.getNumber() % 1 == 0) {
LOG.info("Import at block {}", header.getNumber());
}
if (blockchain.contains(header.getHash())) {
nit: why not just get rid of the if statement to log every block import
if (header.getNumber() == BlockHeader.GENESIS_BLOCK_NUMBER) {
continue;
}
+ if (header.getNumber() % 100 == 0) {
LOG.info("Import at block {}", header.getNumber());
}
if (blockchain.contains(header.getHash())) { |
codereview_java_data_12682 | assertThat(Base64Utils.decode(null)).isNull();
assertThat(Base64Utils.encode(null)).isNull();
assertThat(Base64Utils.decode(Base64Utils.encode("foo"))).isEqualTo("foo");
assertThat(Base64Utils.decode("juststring")).isEqualTo("juststring");
}
}
Could also test for `foo.*`.
assertThat(Base64Utils.decode(null)).isNull();
assertThat(Base64Utils.encode(null)).isNull();
assertThat(Base64Utils.decode(Base64Utils.encode("foo"))).isEqualTo("foo");
+ assertThat(Base64Utils.decode(Base64Utils.encode("foo.*.1"))).isEqualTo("foo.*.1");
assertThat(Base64Utils.decode("juststring")).isEqualTo("juststring");
}
} |
codereview_java_data_12693 | }
// Adding current time to snooze if we got staleData
log.debug("Notification text is: "+notification.text);
- if(notification.text == MainApp.sResources.getString(R.string.nsalarm_staledata)){
NotificationStore nstore = getPlugin().notificationStore;
long msToSnooze = SP.getInt("nsalarm_staledatavalue",15)*60*1000L;
log.debug("snooze nsalarm_staledatavalue in minutes is "+SP.getInt("nsalarm_staledatavalue",15)+"\n in ms is: "+msToSnooze+" currentTimeMillis is: "+System.currentTimeMillis());
`.equals` should be used
}
// Adding current time to snooze if we got staleData
log.debug("Notification text is: "+notification.text);
+ if(notification.text.equals(MainApp.sResources.getString(R.string.nsalarm_staledata))){
NotificationStore nstore = getPlugin().notificationStore;
long msToSnooze = SP.getInt("nsalarm_staledatavalue",15)*60*1000L;
log.debug("snooze nsalarm_staledatavalue in minutes is "+SP.getInt("nsalarm_staledatavalue",15)+"\n in ms is: "+msToSnooze+" currentTimeMillis is: "+System.currentTimeMillis()); |
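Editorial note on row 12693: the fix replaces `==` with `.equals` for the string comparison, since `==` tests reference identity while `.equals` tests character content. A minimal illustration:

```java
public class StringEqualityDemo {
    public static void main(String[] args) {
        String a = "nsalarm_staledata";
        String b = new String("nsalarm_staledata"); // same content, new object
        System.out.println(a == b);      // false: different references
        System.out.println(a.equals(b)); // true: identical character sequences
    }
}
```

The original `==` only happened to work when both sides came from the interned string pool; a resource lookup returns no such guarantee.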
codereview_java_data_12696 | configureJetService(config);
HazelcastInstanceImpl hazelcastInstance = ((HazelcastInstanceProxy)
Hazelcast.newHazelcastInstance(config.getHazelcastConfig())).getOriginal();
- Runtime.getRuntime().addShutdownHook(shutdownHookThread(hazelcastInstance));
- return new JetInstanceImpl(hazelcastInstance, config);
}
/**
We should also remove the shutdown hook.
configureJetService(config);
HazelcastInstanceImpl hazelcastInstance = ((HazelcastInstanceProxy)
Hazelcast.newHazelcastInstance(config.getHazelcastConfig())).getOriginal();
+ JetInstanceImpl jetInstance = new JetInstanceImpl(hazelcastInstance, config);
+ jetInstance.registerShutdownHook();
+ return jetInstance;
}
/** |
codereview_java_data_12720 | /**
* Ignores empty bulk import source directory, rather than throwing an IllegalArgumentException.
*/
ImportMappingOptions ignoreEmptyDir(boolean ignore);
Should include `@since 2.1.0` java doc tag since this will be new.
/**
* Ignores empty bulk import source directory, rather than throwing an IllegalArgumentException.
+ *
+ * @since 2.1.0
*/
ImportMappingOptions ignoreEmptyDir(boolean ignore); |
codereview_java_data_12729 | }
}
sb.append(" axiom{R");
- StringBuilder oldSB = sb;
- sb = new StringBuilder();
- extraSortParams.clear();
if (owise) {
sb.append("\\implies{R} (\n \\and{R} (\n \\not{R} (\n ");
for (Rule notMatching : functionRules.get(productionLabel)) {
This is very weird and not nice. What are you trying to accomplish here?
}
}
sb.append(" axiom{R");
+ Option<Set> sortParams = rule.att().getOption("sortParams", Set.class);
+ if (sortParams.nonEmpty()) {
+ for (Object sort : sortParams.get())
+ sb.append("," + sort);
+ }
+ sb.append("} ");
if (owise) {
sb.append("\\implies{R} (\n \\and{R} (\n \\not{R} (\n ");
for (Rule notMatching : functionRules.get(productionLabel)) { |
codereview_java_data_12735 | *
* @param <T> type of the function input, called <em>domain</em> of the function
* @param <R> type of the function output, called <em>codomain</em> of the function
*/
public interface PartialFunction<T, R> {
`@author` and `@since` tags missed
*
* @param <T> type of the function input, called <em>domain</em> of the function
* @param <R> type of the function output, called <em>codomain</em> of the function
+ * @author Daniel Dietrich
+ * @since 2.1.0
*/
public interface PartialFunction<T, R> { |
codereview_java_data_12744 | }
@Override
- public void setUDFContextSignature(String setSignature) {
- this.signature = setSignature;
}
private void storeInUDFContext(String key, Serializable value) throws IOException {
nit: instead of set or init, we've used new in other modules
}
@Override
+ public void setUDFContextSignature(String newSignature) {
+ this.signature = newSignature;
}
private void storeInUDFContext(String key, Serializable value) throws IOException { |
codereview_java_data_12762 | @Override
public Rewriter apply(Module module) {
if (!module.equals(def.executionModule())) {
- throw KEMException.criticalError("Invalid module specified for rewriting. Ocaml backend only supports rewriting over" +
" the definition's main module.");
}
return new Rewriter() {
this should probably say Haskell, not Ocaml.
@Override
public Rewriter apply(Module module) {
if (!module.equals(def.executionModule())) {
+ throw KEMException.criticalError("Invalid module specified for rewriting. Haskell backend only supports rewriting over" +
" the definition's main module.");
}
return new Rewriter() { |
codereview_java_data_12764 | package com.pinterest.secor.reader;
import com.pinterest.secor.common.SecorConfig;
import com.pinterest.secor.common.ZookeeperConnector;
import com.pinterest.secor.message.Message;
Is this a kafka bug? I would imagine the kafka consumer will reset to the correct offset after partition reassignment. And also do we need to clear any data structure and local data file because of the offset change?
package com.pinterest.secor.reader;
+import com.google.common.base.Strings;
+import com.google.common.collect.Sets;
import com.pinterest.secor.common.SecorConfig;
import com.pinterest.secor.common.ZookeeperConnector;
import com.pinterest.secor.message.Message; |
codereview_java_data_12765 | requestState = oldRequestWithState.get().getState();
}
- if (oldRequest.isPresent() && oldRequest.get().getInstancesSafe() < request.getInstancesSafe()) {
// Trigger cleanups for scale down
int newInstances = request.getInstancesSafe();
taskManager.getActiveTaskIdsForRequest(request.getId()).forEach((taskId) -> {
I think we want to swap the operands on this comparison.
requestState = oldRequestWithState.get().getState();
}
+ if (oldRequest.isPresent() && request.getInstancesSafe() < oldRequest.get().getInstancesSafe()) {
// Trigger cleanups for scale down
int newInstances = request.getInstancesSafe();
taskManager.getActiveTaskIdsForRequest(request.getId()).forEach((taskId) -> { |
codereview_java_data_12767 | List<String> mod = generatedRuleMappers.stream().map(gen -> "new " + gen + "()").collect(Collectors.toList());
expected = "Arrays.asList(" + String.join(", ", mod) + ");";
assertTrue(retrieved.contains(expected));
- System.out.println(retrieved);
}
-}
\ No newline at end of file
I would suggest using a logger instead.
List<String> mod = generatedRuleMappers.stream().map(gen -> "new " + gen + "()").collect(Collectors.toList());
expected = "Arrays.asList(" + String.join(", ", mod) + ");";
assertTrue(retrieved.contains(expected));
}
\ No newline at end of file
+} |
codereview_java_data_12770 | import de.danoeh.antennapod.activity.MainActivity;
import de.danoeh.antennapod.adapter.NavListAdapter;
import de.danoeh.antennapod.adapter.SubscriptionsAdapter;
import de.danoeh.antennapod.core.feed.Feed;
import de.danoeh.antennapod.core.storage.DBReader;
import de.greenrobot.event.EventBus;
You should register with the EventDistributor and react to EventDistributor.FEED_LIST_UPDATE (feeds might be downloaded in a background thread and you probably want to display them as soon as they are processed)
import de.danoeh.antennapod.activity.MainActivity;
import de.danoeh.antennapod.adapter.NavListAdapter;
import de.danoeh.antennapod.adapter.SubscriptionsAdapter;
+import de.danoeh.antennapod.core.feed.EventDistributor;
import de.danoeh.antennapod.core.feed.Feed;
import de.danoeh.antennapod.core.storage.DBReader;
import de.greenrobot.event.EventBus; |
codereview_java_data_12776 | .insertOrUpdateCapacity(group, tenant, quota, maxSize, maxAggrCount, maxAggrSize);
if (insertOrUpdateResult) {
setSuccessResult(response, restResult);
- restResult.setMessage(String.format("Successfully updated %s for capacity information configuration for %s", targetFieldName, targetFieldValue));
return restResult;
}
setFailResult(response, restResult, 500);
- restResult.setMessage(String.format("%s failed to configure an update for capacity information for %s", targetFieldName, targetFieldValue));
return restResult;
} catch (Exception e) {
LOGGER.error("[updateCapacity] ", e);
The configuration information of %s has been successfully updated to %s
.insertOrUpdateCapacity(group, tenant, quota, maxSize, maxAggrCount, maxAggrSize);
if (insertOrUpdateResult) {
setSuccessResult(response, restResult);
+ restResult.setMessage(
+ String.format("Successfully updated %s for capacity information configuration for %s",
+ targetFieldName, targetFieldValue));
return restResult;
}
setFailResult(response, restResult, 500);
+ restResult.setMessage(
+ String.format("%s failed to configure an update for capacity information for %s", targetFieldName,
+ targetFieldValue));
return restResult;
} catch (Exception e) {
LOGGER.error("[updateCapacity] ", e); |
codereview_java_data_12781 | if (msgId == null) {
msgId = msgExt.getProperty(MessageConst.PROPERTY_ORIGIN_MESSAGE_ID);
}
- dlqLogger.info("[DLQ] topic:" + retryTopic + " consumerGroup:" + requestHeader.getGroup() + " msgId:" + msgId);
} else {
if (0 == delayLevel) {
delayLevel = 3 + msgExt.getReconsumeTimes();
Can we use the format version of info? like: `log.info("Topic: {}", topic)`?
if (msgId == null) {
msgId = msgExt.getProperty(MessageConst.PROPERTY_ORIGIN_MESSAGE_ID);
}
+ dlqLogger.info("[DLQ] topic:{} consumerGroup:{} msgId:{}", retryTopic, requestHeader.getGroup(), msgId);
} else {
if (0 == delayLevel) {
delayLevel = 3 + msgExt.getReconsumeTimes(); |
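A runnable sketch of why the reviewer prefers the parameterized form `log.info("topic:{}", topic)`: the logger substitutes `{}` placeholders itself and can skip formatting entirely when the level is disabled, unlike eager `+` concatenation. The formatter below is an illustration of the placeholder behavior, not SLF4J source.

```java
public class PlaceholderDemo {
    // Minimal "{}" substitution, mimicking what the logging framework does lazily.
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int from = 0;
        for (Object arg : args) {
            int idx = pattern.indexOf("{}", from);
            if (idx < 0) {
                break; // more args than placeholders: ignore the extras
            }
            sb.append(pattern, from, idx).append(arg);
            from = idx + 2;
        }
        sb.append(pattern.substring(from));
        return sb.toString();
    }

    public static void main(String[] args) {
        // Topic/group/id values here are made up for illustration.
        System.out.println(format("[DLQ] topic:{} consumerGroup:{} msgId:{}",
                "%RETRY%groupA", "groupA", "MSG-1"));
    }
}
```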
codereview_java_data_12784 | import com.google.errorprone.matchers.Matchers;
import com.google.errorprone.matchers.method.MethodMatchers;
import com.sun.source.tree.VariableTree;
@AutoService(BugChecker.class)
@BugPattern(
I think we can avoid adding a new utility: ```suggestion Matchers.hasModifier(Modifier.FINAL), ``` That will match `interface Iface { Logger log = LoggerFactory.getLogger(Iface.class); }` as `final` which is why we had to implement the custom `MoreMatchers.hasExplicitModifier` for our `RedundantModifier` check.
import com.google.errorprone.matchers.Matchers;
import com.google.errorprone.matchers.method.MethodMatchers;
import com.sun.source.tree.VariableTree;
+import java.lang.reflect.Modifier;
@AutoService(BugChecker.class)
@BugPattern( |
codereview_java_data_12787 | originalDf.select("id", "data").write()
.format("iceberg")
.mode("append")
- .option("extra-metadata.extra-key", "someValue")
- .option("extra-metadata.another-key", "anotherValue")
.save(tableLocation);
Table table = tables.load(tableLocation);
plumbing to pass extra information from write options
originalDf.select("id", "data").write()
.format("iceberg")
.mode("append")
+ .option("snapshot.property.extra-key", "someValue")
+ .option("snapshot.property.another-key", "anotherValue")
.save(tableLocation);
Table table = tables.load(tableLocation); |
codereview_java_data_12795 | // See SEC-2002
@Test
public void onSessionChangePublishesMigrationEventIfMigrateAttributesIsTrue() throws Exception {
- SessionFixationProtectionStrategy strategy = new SessionFixationProtectionStrategy();
HttpServletRequest request = new MockHttpServletRequest();
HttpSession session = request.getSession();
If at all possible, I would like to avoid using reflection to invoke the onSessionChange method
// See SEC-2002
@Test
public void onSessionChangePublishesMigrationEventIfMigrateAttributesIsTrue() throws Exception {
+ SessionFixationProtectionStrategyWithPublicOnSessionChange strategy =
+ new SessionFixationProtectionStrategyWithPublicOnSessionChange();
HttpServletRequest request = new MockHttpServletRequest();
HttpSession session = request.getSession(); |
codereview_java_data_12801 | public void startLdapServer() throws Exception {
UnboundIdContainer server = new UnboundIdContainer(
validRootDn, validLdifClassPath);
- server.setApplicationContext(new GenericApplicationContext());
- List<Integer> ports = getDefaultPorts(1);
- server.setPort(ports.get(0));
-
- try {
- server.afterPropertiesSet();
- assertThat(server.getPort()).isEqualTo(ports.get(0));
- } finally {
- server.destroy();
- }
}
We want to allow a null value for the LDIF file, since it is not required for the developer to provide one.
public void startLdapServer() throws Exception {
UnboundIdContainer server = new UnboundIdContainer(
validRootDn, validLdifClassPath);
+ createAndRunServer(server);
} |
codereview_java_data_12805 | FormInfo formInfo = repositoryService.getFormModelByKey("form1");
SimpleFormModel formModel = (SimpleFormModel) formInfo.getFormModel();
- assertThat(formModel.getFields().size()).isOne();
- assertThat(formModel.getFields().get(0).getId()).isEqualTo("input1");
- assertThat(formModel.getFields().get(0).getName()).isEqualTo("Input1");
FormDeployment redeployment = repositoryService.createDeployment()
.addClasspathResource("org/flowable/form/engine/test/deployment/simple2.form")
How about: ``` assertThat(formModel.getFields()) .extracting(FormField::getId, FormField::getName) .containsExactly(tuple("input1", "Input1"); ```
FormInfo formInfo = repositoryService.getFormModelByKey("form1");
SimpleFormModel formModel = (SimpleFormModel) formInfo.getFormModel();
+ assertThat(formModel.getFields()).hasSize(1);
+ assertThat(formModel.getFields())
+ .extracting(FormField::getId, FormField::getName)
+ .containsExactly(tuple("input1", "Input1"));
FormDeployment redeployment = repositoryService.createDeployment()
.addClasspathResource("org/flowable/form/engine/test/deployment/simple2.form") |
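A plain-Java sketch of what the suggested AssertJ chain `extracting(FormField::getId, FormField::getName).containsExactly(tuple("input1", "Input1"))` verifies: pull the chosen fields off each element and compare them pairwise, in order. `FormField` below is a hypothetical stand-in, not the Flowable class.

```java
import java.util.List;

public class ExtractingDemo {
    // Hypothetical stand-in for the form-field model class.
    static final class FormField {
        final String id;
        final String name;
        FormField(String id, String name) { this.id = id; this.name = name; }
    }

    // Equivalent of: assertThat(fields)
    //     .extracting(FormField::getId, FormField::getName)
    //     .containsExactly(tuple("input1", "Input1"));
    static boolean matchesExactly(List<FormField> fields) {
        return fields.size() == 1
                && "input1".equals(fields.get(0).id)
                && "Input1".equals(fields.get(0).name);
    }

    public static void main(String[] args) {
        List<FormField> fields = List.of(new FormField("input1", "Input1"));
        System.out.println(matchesExactly(fields)); // true
    }
}
```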
codereview_java_data_12806 | } else {
continue;
}
-
if (!ignoreInterface(ifc.getDisplayName())) {
for (Enumeration<InetAddress> addrs = ifc.getInetAddresses(); addrs.hasMoreElements(); ) {
InetAddress address = addrs.nextElement();
- if ((IPUtil.PREFER_IPV6_ADDRESSES ? address instanceof Inet6Address
- : address instanceof Inet4Address) && !address.isLoopbackAddress()
- && isPreferredAddress(address)) {
LOG.debug("Found non-loopback interface: " + ifc.getDisplayName());
result = address;
}
`IPUtil.PREFER_IPV6_ADDRESSES ? address instanceof Inet6Address : address instanceof Inet4Address` Extract this to a separate line; it is more readable.
} else {
continue;
}
+
if (!ignoreInterface(ifc.getDisplayName())) {
for (Enumeration<InetAddress> addrs = ifc.getInetAddresses(); addrs.hasMoreElements(); ) {
InetAddress address = addrs.nextElement();
+ boolean isLegalIpVersion = IPUtil.PREFER_IPV6_ADDRESSES ? address instanceof Inet6Address
+ : address instanceof Inet4Address;
+ if (isLegalIpVersion && !address.isLoopbackAddress() && isPreferredAddress(address)) {
LOG.debug("Found non-loopback interface: " + ifc.getDisplayName());
result = address;
} |
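A small runnable sketch of the readability fix above: hoist the ternary into a named local (or helper) so the `if` condition stays short. `PREFER_IPV6` stands in for `IPUtil.PREFER_IPV6_ADDRESSES`.

```java
import java.net.Inet4Address;
import java.net.Inet6Address;
import java.net.InetAddress;

public class NamedConditionDemo {
    // Stand-in for IPUtil.PREFER_IPV6_ADDRESSES.
    static final boolean PREFER_IPV6 = false;

    // The extracted condition gets a name that explains what it checks.
    static boolean isLegalIpVersion(InetAddress address) {
        return PREFER_IPV6 ? address instanceof Inet6Address
                : address instanceof Inet4Address;
    }

    public static void main(String[] args) {
        InetAddress loopback = InetAddress.getLoopbackAddress(); // 127.0.0.1
        if (isLegalIpVersion(loopback) && !loopback.isLoopbackAddress()) {
            System.out.println("candidate: " + loopback);
        } else {
            System.out.println("skipped loopback");
        }
    }
}
```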
codereview_java_data_12813 | for (DLNAResource dlnaResource : files) {
if (dlnaResource instanceof PlaylistFolder) {
- ((PlaylistFolder) dlnaResource).resolve();
}
}
We are already looping through `files` further down on line `1202`, can you add this logic to that existing loop for performance?
for (DLNAResource dlnaResource : files) {
if (dlnaResource instanceof PlaylistFolder) {
+ File f = new File(dlnaResource.getFileName());
+ if (dlnaResource.getLastModified() < f.lastModified()) {
+ ((PlaylistFolder) dlnaResource).resolve();
+ }
}
} |
codereview_java_data_12815 | while (radius <= MAX_RADIUS) {
try {
places = getFromWikidataQuery(curLatLng, lang, radius);
- }catch (Exception e){
Timber.d("exception in fetching nearby places", e.getLocalizedMessage());
return null;
}
Please use consistent whitespaces :)
while (radius <= MAX_RADIUS) {
try {
places = getFromWikidataQuery(curLatLng, lang, radius);
+ } catch (Exception e) {
Timber.d("exception in fetching nearby places", e.getLocalizedMessage());
return null;
} |
codereview_java_data_12833 | }
}
- if (MediaDatabase.isInstanciated()) {
MediaDatabase.shutdown();
if (configuration.getDatabaseLogging()) {
MediaDatabase.createReport();
I think it's spelled `Instantiated`
}
}
+ if (MediaDatabase.isInstantiated()) {
MediaDatabase.shutdown();
if (configuration.getDatabaseLogging()) {
MediaDatabase.createReport(); |
codereview_java_data_12836 | rowBuilder.set(i, Double.NaN);
break;
}
default:
rowBuilder.set(i, s);
break;
Could you move the three ```break;``` clauses to the end of the ```case``` block? No need to have three.
rowBuilder.set(i, Double.NaN);
break;
}
+ //fallthrough
default:
rowBuilder.set(i, s);
break; |
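A minimal sketch of the pattern the reviewer asks for: when a `case` deliberately shares the next branch, drop the redundant `break`s and document the fall-through with a comment. The tokens below are made up for illustration.

```java
public class FallthroughDemo {
    static String classify(String token) {
        String result;
        switch (token) {
            case "nan":
                result = "NaN";
                break;
            case "na":
                // deliberately shares the default branch
                //fallthrough
            default:
                result = token; // one break at the end instead of three
                break;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(classify("nan")); // NaN
        System.out.println(classify("na"));  // na
    }
}
```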
codereview_java_data_12837 | ClientContext.getConditionalWriterConfig(props);
assertNotNull(conditionalWriterConfig);
- // If the value of BATCH_WRITER_TIMEOUT_MAX is set to zero, Long.MAX_VALUE is returned.
// Effectively, this indicates there is no timeout for BATCH_WRITER_TIMEOUT_MAX. Due to this
// behavior, the test compares the return values differently. If a value of 0 is used, compare
// the return value using TimeUnit.MILLISECONDS, otherwise the value should be converted to
```suggestion // If the value of CONDITIONAL_WRITER_TIMEOUT_MAX is set to zero, Long.MAX_VALUE is returned. ```
ClientContext.getConditionalWriterConfig(props);
assertNotNull(conditionalWriterConfig);
+ // If the value of CONDITIONAL_WRITER_TIMEOUT_MAX is set to zero, Long.MAX_VALUE is returned.
// Effectively, this indicates there is no timeout for BATCH_WRITER_TIMEOUT_MAX. Due to this
// behavior, the test compares the return values differently. If a value of 0 is used, compare
// the return value using TimeUnit.MILLISECONDS, otherwise the value should be converted to |
codereview_java_data_12845 | private final RecentLogs recentLogs = new RecentLogs();
private long scansFetchedNanos = 0L;
private long compactsFetchedNanos = 0L;
- private final long fetchTimeNanos = TimeUnit.NANOSECONDS.convert(1, TimeUnit.MINUTES);
- private final long ageOffEntriesMillis = TimeUnit.MILLISECONDS.convert(15, TimeUnit.MINUTES);
/**
* Fetch the active scans but only if fetchTimeNanos has elapsed.
Small nit: TimeUnit has a `convert` method, but also has a series of `to<Unit>` methods that are more explicit. The `convert` method can be sometimes confusing, because it's not obvious the direction in which you're doing the converting. In general, I recommend the use of the more explicit `toNanos` and `toMillis`, etc. methods instead of `convert`.
private final RecentLogs recentLogs = new RecentLogs();
private long scansFetchedNanos = 0L;
private long compactsFetchedNanos = 0L;
+ private final long fetchTimeNanos = TimeUnit.MINUTES.toNanos(1);
+ private final long ageOffEntriesMillis = TimeUnit.MINUTES.toMillis(15);
/**
* Fetch the active scans but only if fetchTimeNanos has elapsed. |
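A runnable comparison of the two `TimeUnit` styles from the review: the `to<Unit>` methods make the conversion direction obvious, while `convert()` computes the same value but reads "backwards" (the target unit is the receiver, the source unit is an argument).

```java
import java.util.concurrent.TimeUnit;

public class TimeUnitDemo {
    public static void main(String[] args) {
        // Explicit direction: minutes -> nanos, minutes -> millis.
        long fetchTimeNanos = TimeUnit.MINUTES.toNanos(1);
        long ageOffEntriesMillis = TimeUnit.MINUTES.toMillis(15);

        // Equivalent convert() calls - same values, harder to read:
        long viaConvert = TimeUnit.NANOSECONDS.convert(1, TimeUnit.MINUTES);

        System.out.println(fetchTimeNanos == viaConvert); // true
        System.out.println(ageOffEntriesMillis);          // 900000
    }
}
```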
codereview_java_data_12846 | pipeline.readFrom(TestSources.items(1, 2))
.map(Value::new)
.writeTo(Sinks.observable(OBSERVABLE_NAME));
- long timeout = 120;
// When
Observable<Value> observable = client().getObservable(OBSERVABLE_NAME);
Why not simply inline it? And why 120?
pipeline.readFrom(TestSources.items(1, 2))
.map(Value::new)
.writeTo(Sinks.observable(OBSERVABLE_NAME));
// When
Observable<Value> observable = client().getObservable(OBSERVABLE_NAME); |
codereview_java_data_12863 | * @since 4.0
*/
@Nonnull
- File getAttachedDirectory(@Nonnull String id);
/**
* Returns the attached file to the job with the given id.
You can skip the `get` (so just have `attachedDirectory`), as that's the convention on this interface.
* @since 4.0
*/
@Nonnull
+ File attachedDirectory(@Nonnull String id);
/**
* Returns the attached file to the job with the given id. |
codereview_java_data_12869 | package net.sourceforge.pmd.lang.java.metrics.impl.visitors;
-import java.util.ArrayList;
import java.util.List;
import net.sourceforge.pmd.lang.java.ast.ASTConditionalExpression;
Would it work here to use `findChildrenOfType(ASTStatement.class)` instead of checking the type directly?
package net.sourceforge.pmd.lang.java.metrics.impl.visitors;
import java.util.List;
import net.sourceforge.pmd.lang.java.ast.ASTConditionalExpression; |
codereview_java_data_12877 | // Copy options if flag was set
if (cl.hasOption(createTableOptCopyConfig.getOpt())) {
if (shellState.getAccumuloClient().tableOperations().exists(tableName)) {
- final Iterable<Entry<String,String>> configuration =
- shellState.getAccumuloClient().tableOperations()
- .getPropertiesMap(cl.getOptionValue(createTableOptCopyConfig.getOpt())).entrySet();
- for (Entry<String,String> entry : configuration) {
if (Property.isValidTablePropertyKey(entry.getKey())) {
shellState.getAccumuloClient().tableOperations().setProperty(tableName, entry.getKey(),
entry.getValue());
Instead of calling the new method, only to call entrySet and assign it to an Iterable, many of these would make more sense being assigned to a variable that is of the Map type instead. The assignments to Iterable types are very clunky, and many of them probably only were written that way because that's the type that the old method returned, not because we wanted them to be an Iterable.
// Copy options if flag was set
if (cl.hasOption(createTableOptCopyConfig.getOpt())) {
if (shellState.getAccumuloClient().tableOperations().exists(tableName)) {
+ final Map<String,String> configuration = shellState.getAccumuloClient().tableOperations()
+ .getConfiguration(cl.getOptionValue(createTableOptCopyConfig.getOpt()));
+ for (Entry<String,String> entry : configuration.entrySet()) {
if (Property.isValidTablePropertyKey(entry.getKey())) {
shellState.getAccumuloClient().tableOperations().setProperty(tableName, entry.getKey(),
entry.getValue()); |
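A short sketch of the reviewer's typing point: declare the variable as the `Map` it really is and call `entrySet()` only where iteration happens, instead of assigning `entrySet()` to an `Iterable<Entry<...>>`. The property names below are made up for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapTypeDemo {
    // Keeping the Map type preserves size(), get(), etc.; the Iterable
    // assignment in the original code threw that API away.
    static String render(Map<String, String> configuration) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> entry : configuration.entrySet()) {
            sb.append(entry.getKey()).append('=').append(entry.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> configuration = new LinkedHashMap<>();
        configuration.put("table.split.threshold", "1G");
        configuration.put("table.file.replication", "3");
        System.out.print(render(configuration));
        System.out.println(configuration.size()); // still a Map: 2
    }
}
```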
codereview_java_data_12886 | @Override
public boolean hasNext() {
while (that.hasNext()) {
- queue = queue.append(that.next());
if(queue.length() > n) {
queue = queue.dequeue()._2;
}
I would call `queue.enqueue(that.next())` (which is the same)
@Override
public boolean hasNext() {
while (that.hasNext()) {
+ queue = queue.enqueue(that.next());
if(queue.length() > n) {
queue = queue.dequeue()._2;
} |
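The Vavr `hasNext()` above implements "keep the last n elements" with a bounded queue; a self-contained sketch of the same idea using a mutable `ArrayDeque` (illustrative names, not the Vavr API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

public class TakeRightDemo {
    // Enqueue each element; once the queue exceeds n, drop the oldest.
    // What remains at the end is the last n elements of the iterator.
    static <T> Deque<T> takeRight(Iterator<T> it, int n) {
        Deque<T> queue = new ArrayDeque<>();
        while (it.hasNext()) {
            queue.addLast(it.next());  // queue.enqueue(that.next()) in Vavr
            if (queue.size() > n) {
                queue.removeFirst();   // queue.dequeue()._2 in Vavr
            }
        }
        return queue;
    }

    public static void main(String[] args) {
        System.out.println(takeRight(List.of(1, 2, 3, 4, 5).iterator(), 2)); // [4, 5]
    }
}
```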
codereview_java_data_12892 | private static final double FAV3_DEFAULT = 20;
private CheckBox suspendLoopCheckbox;
private CheckBox startActivityTTCheckbox;
- private CheckBox ESMCheckbox;
private Integer maxCarbs;
Could you please rename this to start with a lowercase letter.
private static final double FAV3_DEFAULT = 20;
private CheckBox suspendLoopCheckbox;
private CheckBox startActivityTTCheckbox;
+ private CheckBox startEsTTCheckbox;
private Integer maxCarbs; |
codereview_java_data_12904 | /**
* TODO to be improved by loading only if required and only in the user language
* Load ingredients from (the server or) local database
- * If SharedPreferences lastDLIngredients is set try this :
* if file from the server is newer than last download delete database, load the file and fill database,
* else if database is empty, download the file and fill database,
* else return the content from the local database.
Can we avoid abbreviating things? `lastDL` -> `lastDownload`, for instance?
/**
* TODO to be improved by loading only if required and only in the user language
* Load ingredients from (the server or) local database
+ * If SharedPreferences lastDownloadIngredients is set try this :
* if file from the server is newer than last download delete database, load the file and fill database,
* else if database is empty, download the file and fill database,
* else return the content from the local database. |
codereview_java_data_12905 | Preconditions.checkArgument(serde.nonEmpty() || table.provider().nonEmpty(),
"Partition format should be defined");
- URI uri = locationUri.get();
String format = serde.nonEmpty() ? serde.get() : table.provider().get();
Map<String, String> partitionSpec = JavaConverters.mapAsJavaMapConverter(partition.spec()).asJava();
Looks like the main problem is that `new Path(locationUri.get())` is not the same as `new Path(locationUri.get().toString())`?
Preconditions.checkArgument(serde.nonEmpty() || table.provider().nonEmpty(),
"Partition format should be defined");
+ String uri = uriToString(locationUri.get());
String format = serde.nonEmpty() ? serde.get() : table.provider().get();
Map<String, String> partitionSpec = JavaConverters.mapAsJavaMapConverter(partition.spec()).asJava(); |
codereview_java_data_12908 | * exception occurs calling {@code supplier.get()}.
*/
static <T> Try<T> ofSupplier(Supplier<? extends T> supplier) {
- try {
- return new Success<>(supplier.get());
- } catch (Throwable t) {
- return new Failure<>(t);
- }
}
/**
Let's return `of(supplier::get)` to reduce duplicate code. The compiler will translate the method reference to a simple method call - there's no additional overhead. Please do this for all other static factory methods we introduce in this PR, too.
* exception occurs calling {@code supplier.get()}.
*/
static <T> Try<T> ofSupplier(Supplier<? extends T> supplier) {
+ return of(supplier::get);
}
/** |
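A minimal stand-in for the delegation the reviewer asks for: `ofSupplier` forwards to `of(supplier::get)`, so the try/catch lives in exactly one place, and the method reference adapts `Supplier` to the checked functional interface at no runtime cost. `Success`/`Failure` are modeled as plain return values here; this is not the real Vavr API.

```java
import java.util.function.Supplier;

public class TryDemo {
    interface CheckedSupplier<T> { T get() throws Throwable; }

    // The single place that handles exceptions.
    static <T> Object of(CheckedSupplier<? extends T> supplier) {
        try {
            return supplier.get();   // stand-in for new Success<>(...)
        } catch (Throwable t) {
            return t;                // stand-in for new Failure<>(t)
        }
    }

    // No duplicated try/catch: supplier::get adapts Supplier to CheckedSupplier.
    static <T> Object ofSupplier(Supplier<? extends T> supplier) {
        return of(supplier::get);
    }

    public static void main(String[] args) {
        System.out.println(ofSupplier(() -> 42));                              // 42
        System.out.println(of(() -> { throw new IllegalStateException("boom"); }));
    }
}
```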
codereview_java_data_12909 | this.password = password;
}
- public float getCurrentPlaybackSpeed() {
float speed = 0.0f;
if (!"global".equals(feedPlaybackSpeed)) {
I think this method should return some magic value if it is set to global speed (public static float SPEED_USE_GLOBAL=-1 etc). Currently, the UserPreferences object is accessed from all over the model. I would prefer to keep the model independent from reading preferences (that also makes it easier to test).
this.password = password;
}
+ float getCurrentPlaybackSpeed() {
float speed = 0.0f;
if (!"global".equals(feedPlaybackSpeed)) { |
codereview_java_data_12913 | return false;
}
- if (!validateRoundChangeIsForCurrentHeightAndTargetsFutureRound(msg)) {
return false;
}
The correct check is prepareCertRound.getRoundNumber() < RoundChangeMessage.getRoundChangeIdentifier().getRoundIdentifier()
return false;
}
+ final ConsensusRoundIdentifier roundChangeTarget =
+ msg.getUnsignedMessageData().getRoundChangeIdentifier();
+
+ if (roundChangeTarget.getSequenceNumber() != currentRound.getSequenceNumber()) {
+ LOG.info("Invalid RoundChange message, not valid for local chain height.");
return false;
} |
codereview_java_data_12920 | contribution.setDescription("");
}
- String license = prefs.getString(Prefs.DEFAULT_LICENSE, null);
contribution.setLicense(license);
//FIXME: Add permission request here. Only executeAsyncTask if permission has been granted
`license` is nullable now. Add null checks before using it anywhere. Its used below in ``` Utils.licenseNameFor(license) ```
contribution.setDescription("");
}
+ String license = prefs.getString(Prefs.DEFAULT_LICENSE, Prefs.Licenses.CC_BY_SA_3);
contribution.setLicense(license);
//FIXME: Add permission request here. Only executeAsyncTask if permission has been granted |
codereview_java_data_12921 | import org.thoughtcrime.securesms.util.Util;
public class ConversationUpdateItem extends LinearLayout
- implements Recipients.RecipientsModifiedListener, Recipient.RecipientModifiedListener, View.OnClickListener
{
private static final String TAG = ConversationUpdateItem.class.getSimpleName();
dupe code, throw in a private onModified() that's called from both callbacks?
import org.thoughtcrime.securesms.util.Util;
public class ConversationUpdateItem extends LinearLayout
+ implements Recipients.RecipientsModifiedListener, Recipient.RecipientModifiedListener, Unbindable, View.OnClickListener
{
private static final String TAG = ConversationUpdateItem.class.getSimpleName(); |
codereview_java_data_12922 | import java.util.Objects;
import java.util.concurrent.CompletableFuture;
-import software.amazon.awssdk.annotations.SdkPreviewApi;
-import software.amazon.awssdk.annotations.SdkPublicApi;
import software.amazon.awssdk.transfer.s3.CompletedDirectoryUpload;
import software.amazon.awssdk.transfer.s3.DirectoryUpload;
import software.amazon.awssdk.utils.ToString;
-@SdkPublicApi
-@SdkPreviewApi
public final class DefaultDirectoryUpload implements DirectoryUpload {
private final CompletableFuture<CompletedDirectoryUpload> completionFuture;
Internal? We should double-check all annotations.
import java.util.Objects;
import java.util.concurrent.CompletableFuture;
+import software.amazon.awssdk.annotations.SdkInternalApi;
import software.amazon.awssdk.transfer.s3.CompletedDirectoryUpload;
import software.amazon.awssdk.transfer.s3.DirectoryUpload;
import software.amazon.awssdk.utils.ToString;
+@SdkInternalApi
public final class DefaultDirectoryUpload implements DirectoryUpload {
private final CompletableFuture<CompletedDirectoryUpload> completionFuture; |
codereview_java_data_12928 | */
package tech.pegasys.pantheon.cli.custom;
import java.net.URI;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
should we have consts for these magic numbers?
*/
package tech.pegasys.pantheon.cli.custom;
+import static com.google.common.base.Preconditions.checkArgument;
+
+import tech.pegasys.pantheon.util.NetworkUtility;
+
import java.net.URI;
import java.util.regex.Matcher;
import java.util.regex.Pattern; |
codereview_java_data_12929 | String videoType = "V_MPEG4/ISO/AVC";
String codecV = media.getCodecV();
- if (codecV != null && configuration.isUseMediaInfo()) {
if (codecV.equals(FormatConfiguration.MPEG2)) {
videoType = "V_MPEG-2";
} else if (codecV.equals(FormatConfiguration.H265)) {
Why do we need `isUseMediaInfo` here? That seems unrelated to the logic
String videoType = "V_MPEG4/ISO/AVC";
String codecV = media.getCodecV();
+ if (codecV != null) {
if (codecV.equals(FormatConfiguration.MPEG2)) {
videoType = "V_MPEG-2";
} else if (codecV.equals(FormatConfiguration.H265)) { |
codereview_java_data_12935 | if (!assignments.isEmpty()) {
Master.log.info(String.format("Assigning %d tablets", assignments.size()));
-
- for (Assignment assignment : assignments)
- store.setFutureLocation(assignment);
}
assignments.addAll(assigned);
for (Assignment a : assignments) {
It seems like this code would be less efficient, because it creates a new batch writer for each assignment, and that the method that handles the updates in Ample should deal with updating all of the assignments in one pass. What do you know about the performance implications of this change?
if (!assignments.isEmpty()) {
Master.log.info(String.format("Assigning %d tablets", assignments.size()));
+ store.setFutureLocations(assignments);
}
assignments.addAll(assigned);
for (Assignment a : assignments) { |
codereview_java_data_12959 | public FlatModule toFlatModule(Module m) {
CheckListDecl.check(m);
- String name = m.getName();
- moduleName = name;
Set<org.kframework.definition.Sentence> items = m.getItems().stream()
.filter(j -> !(j instanceof org.kframework.kil.Import))
Can we just get rid of the intermediate `name` instead? So say `moduleName = m.getName()` directly, and use `moduleName` later in this?
public FlatModule toFlatModule(Module m) {
CheckListDecl.check(m);
+ moduleName = m.getName();
Set<org.kframework.definition.Sentence> items = m.getItems().stream()
.filter(j -> !(j instanceof org.kframework.kil.Import)) |
codereview_java_data_12972 | .iterator();
while (iterator.hasNext()) {
String topic = iterator.next().getKey();
- if (topicList.getTopicList().contains(topic) || !specialTopic && (topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX) || topic.startsWith(MixAll.DLQ_GROUP_TOPIC_PREFIX))) {
iterator.remove();
}
}
!specialTopic && (topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX) || topic.startsWith(MixAll.DLQ_GROUP_TOPIC_PREFIX))
.iterator();
while (iterator.hasNext()) {
String topic = iterator.next().getKey();
+ if (topicList.getTopicList().contains(topic) || (!specialTopic && (topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX) || topic.startsWith(MixAll.DLQ_GROUP_TOPIC_PREFIX)))) {
iterator.remove();
}
} |
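The extra parentheses in the revision above make the operator grouping explicit. A quick sketch of the precedence point: `&&` binds tighter than `||`, so the two forms below always agree, but only the parenthesized one states the intended grouping for the reader.

```java
public class PrecedenceDemo {
    public static void main(String[] args) {
        // Stand-ins for the conditions in the topic-filtering loop.
        boolean inList = false;
        boolean specialTopic = false;
        boolean isRetryTopic = true;
        boolean isDlqTopic = false;

        // Parses as: inList || ((!specialTopic) && (isRetryTopic || isDlqTopic))
        boolean implicit = inList || !specialTopic && (isRetryTopic || isDlqTopic);
        // Same grouping, written out explicitly:
        boolean explicit = inList || (!specialTopic && (isRetryTopic || isDlqTopic));

        System.out.println(implicit + " " + explicit); // true true
    }
}
```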
codereview_java_data_12994 | final KeyPair keyPair = KeyPair.generate();
return Transaction.builder()
- .nonce(0)
.gasPrice(privateTransaction.getGasPrice())
.gasLimit(privateTransaction.getGasLimit())
.to(getPrivacyPrecompileAddress())
Just to note - this can only be used in networks where the minimum gas price is 0
final KeyPair keyPair = KeyPair.generate();
return Transaction.builder()
+ .nonce(0L)
.gasPrice(privateTransaction.getGasPrice())
.gasLimit(privateTransaction.getGasLimit())
.to(getPrivacyPrecompileAddress()) |
codereview_java_data_12998 | private String shellCommandUserPlaceholder = "{USER}";
@JsonProperty
- private String shellCommandPidFile = ".shell_command_pid";
public SingularityExecutorConfiguration() {
super(Optional.of("singularity-executor.log"));
this is the PID file for the command that the shell command should operate on, right? `.shell_command_pid` seems a little ambiguous to me, can we make it clearer? maybe `.task-pid`?
private String shellCommandUserPlaceholder = "{USER}";
@JsonProperty
+ private String shellCommandPidFile = ".task-pid";
public SingularityExecutorConfiguration() {
super(Optional.of("singularity-executor.log")); |
codereview_java_data_13004 | builder.setContentTitle(recipient.toShortString());
builder.setContentText(notifications.get(0).getText());
builder.setContentIntent(notifications.get(0).getPendingIntent(context));
- builder.setContentInfo(notificationState.getMessageCount()+"");
if (masterSecret != null) {
builder.addAction(R.drawable.check, context.getString(R.string.MessageNotifier_mark_as_read),
String.valueOf() please. I know I'm a hypocrite.
builder.setContentTitle(recipient.toShortString());
builder.setContentText(notifications.get(0).getText());
builder.setContentIntent(notifications.get(0).getPendingIntent(context));
+ builder.setContentInfo(String.valueOf(notificationState.getMessageCount()));
if (masterSecret != null) {
builder.addAction(R.drawable.check, context.getString(R.string.MessageNotifier_mark_as_read), |
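A tiny sketch of the `String.valueOf()` point from the review: both forms produce the same string, but the explicit conversion states the intent instead of relying on `+ ""` concatenation.

```java
public class ValueOfDemo {
    public static void main(String[] args) {
        int messageCount = 3; // stand-in for notificationState.getMessageCount()
        String viaConcat = messageCount + "";             // works, but implicit
        String viaValueOf = String.valueOf(messageCount); // states the intent
        System.out.println(viaValueOf.equals(viaConcat)); // true
    }
}
```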
codereview_java_data_13005 | private CompactionCoordinatorService.Client coordinatorClient;
private final String coordinatorMissingMsg =
"Error getting the compaction coordinator. Check that it is running. It is not "
- + "started automatically with other cluster processes so must be started by running"
- + "'accumulo compaction-coordinator`.";
private EmbeddedWebServer server;
```suggestion "Error getting the compaction coordinator. Check that it is running. It is not " + "started automatically with other cluster processes so must be started by running " + "'accumulo compaction-coordinator'."; ```
private CompactionCoordinatorService.Client coordinatorClient;
private final String coordinatorMissingMsg =
"Error getting the compaction coordinator. Check that it is running. It is not "
+ + "started automatically with other cluster processes so must be started by running "
+ + "'accumulo compaction-coordinator'.";
private EmbeddedWebServer server; |
codereview_java_data_13006 | .addBreadcrumb(PreferenceActivity.getTitleOfPage(R.xml.preferences_gpodder));
config.index(R.xml.preferences_notifications)
.addBreadcrumb(PreferenceActivity.getTitleOfPage(R.xml.preferences_notifications));
- config.index(R.xml.feed_settings)
- .addBreadcrumb(PreferenceActivity.getTitleOfPage(R.xml.feed_settings));
-
}
}
If you just use getString here, you don't need to add the title to the activity, I think.
.addBreadcrumb(PreferenceActivity.getTitleOfPage(R.xml.preferences_gpodder));
config.index(R.xml.preferences_notifications)
.addBreadcrumb(PreferenceActivity.getTitleOfPage(R.xml.preferences_notifications));
}
} |
codereview_java_data_13008 | /**
* Narrows the given {@code CheckedFunction8<? super T1, ? super T2, ? super T3, ? super T4, ? super T5, ? super T6, ? super T7, ? super T8, ? extends R>} to {@code CheckedFunction8<T1, T2, T3, T4, T5, T6, T7, T8, R>}
*
- * @param wideFunction A {@code CheckedFunction8}
* @param <R> return type
* @param <T1> 1st argument
* @param <T2> 2nd argument
[Checkstyle] ERROR: Name 'T3' must match pattern '^[A-Z]$'.
/**
* Narrows the given {@code CheckedFunction8<? super T1, ? super T2, ? super T3, ? super T4, ? super T5, ? super T6, ? super T7, ? super T8, ? extends R>} to {@code CheckedFunction8<T1, T2, T3, T4, T5, T6, T7, T8, R>}
*
+ * @param f A {@code CheckedFunction8}
* @param <R> return type
* @param <T1> 1st argument
* @param <T2> 2nd argument |
codereview_java_data_13014 | /**
* HATest
*
- * @author yanglibo@qccr.com
- * @version HATest.java 2019年01月14日 17:34:31
*/
public class HATest {
private final String StoreMessage = "Once, there was a chance for me!";
It is better to remove "author" and "version" here.
/**
* HATest
*
*/
public class HATest {
private final String StoreMessage = "Once, there was a chance for me!"; |
codereview_java_data_13027 | }
/**
- * Utility for registring completable futures for cleanup if this EthTask is cancelled.
*
* @param <S> the type of data returned from the CompletableFuture
* @param subTaskFuture the future to be registered.
This creates a new future that is then immediately thrown away so I think this whole else block can just be removed.
}
/**
+ * Utility for registering completable futures for cleanup if this EthTask is cancelled.
*
* @param <S> the type of data returned from the CompletableFuture
* @param subTaskFuture the future to be registered. |
codereview_java_data_13031 | public class Util {
- public static final String VERSION_HINT_TXT_FILENAME = "version-hint.text";
private static final Logger LOG = LoggerFactory.getLogger(Util.class);
+1 for creating a constant. I think we can drop `TXT` in the constant name, `VERSION_HINT_FILENAME` should be descriptive enough.
public class Util {
+ public static final String VERSION_HINT_FILENAME = "version-hint.text";
private static final Logger LOG = LoggerFactory.getLogger(Util.class); |
codereview_java_data_13037 | private final SparkSession spark;
private final JavaSparkContext sparkContext;
- private AtomicInteger counter = new AtomicInteger();
protected BaseSparkAction(SparkSession spark) {
this.spark = spark;
I think this can be final. I'd also call it `jobCounter` or something to be specific. If we decide to keep the job counter in `BaseSparkAction`, then we can offer the following method: ``` protected JobGroupInfo newJobGroupInfo(String groupId, String desc) { return new JobGroupInfo(groupId, desc + "-" + jobCounter.incrementAndGet(), false); } ``` That way, we hide the complexity of assigning the job count from individual actions.
private final SparkSession spark;
private final JavaSparkContext sparkContext;
+ private final AtomicInteger jobCounter = new AtomicInteger();
protected BaseSparkAction(SparkSession spark) {
this.spark = spark; |
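The factory method the reviewer proposes can be sketched in isolation. `JobGroupInfo` below is a hypothetical, simplified stand-in for the real Iceberg class (the `(groupId, description, interruptOnCancel)` constructor is an assumption), not the actual API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-in for the Iceberg JobGroupInfo class (assumed shape).
class JobGroupInfo {
    final String groupId;
    final String description;
    final boolean interruptOnCancel;

    JobGroupInfo(String groupId, String description, boolean interruptOnCancel) {
        this.groupId = groupId;
        this.description = description;
        this.interruptOnCancel = interruptOnCancel;
    }
}

public class BaseActionSketch {
    private final AtomicInteger jobCounter = new AtomicInteger();

    // Each call appends a monotonically increasing suffix to the description,
    // so individual actions never touch the counter directly.
    protected JobGroupInfo newJobGroupInfo(String groupId, String desc) {
        return new JobGroupInfo(groupId, desc + "-" + jobCounter.incrementAndGet(), false);
    }

    public static void main(String[] args) {
        BaseActionSketch action = new BaseActionSketch();
        JobGroupInfo first = action.newJobGroupInfo("EXPIRE-SNAPSHOTS", "expire");
        JobGroupInfo second = action.newJobGroupInfo("EXPIRE-SNAPSHOTS", "expire");
        if (!first.description.equals("expire-1") || !second.description.equals("expire-2")) {
            throw new AssertionError(first.description + ", " + second.description);
        }
        System.out.println("ok");
    }
}
```

Hiding the increment behind a single method is what lets the field stay `private final` in the base class.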
codereview_java_data_13039 | if (bl>contained.getBeginLine()) return false;
if (bl==contained.getBeginLine() && bc>contained.getBeginColumn()) return false;
if (container.getEndLine()<contained.getEndLine()) return false;
return !(container.getEndLine() == contained.getEndLine() && container.getEndColumn() < contained.getEndColumn());
}
return true;
Should it be <=?
if (bl>contained.getBeginLine()) return false;
if (bl==contained.getBeginLine() && bc>contained.getBeginColumn()) return false;
if (container.getEndLine()<contained.getEndLine()) return false;
+ // TODO < or <= ?
return !(container.getEndLine() == contained.getEndLine() && container.getEndColumn() < contained.getEndColumn());
}
return true; |
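The `<` vs `<=` question is about whether a region whose end position equals the container's end position counts as contained. A hypothetical, simplified version of the check (plain ints instead of the real node API) makes the boundary behaviour easy to exercise:

```java
// Simplified containment check over (beginLine, beginColumn, endLine, endColumn).
// With "< cec" negated, equal end positions are treated as contained, i.e. the
// end boundary is inclusive -- the exact point the reviewer's question raises.
public class Containment {

    static boolean contains(int bl, int bc, int el, int ec,
                            int cbl, int cbc, int cel, int cec) {
        if (bl > cbl) return false;
        if (bl == cbl && bc > cbc) return false;
        if (el < cel) return false;
        // inclusive end: a region ending exactly at the container's end is contained
        return !(el == cel && ec < cec);
    }

    public static void main(String[] args) {
        // container spans lines 1..5; contained spans lines 2..3
        if (!contains(1, 1, 5, 10, 2, 1, 3, 5)) throw new AssertionError();
        // identical regions: contained under the current (inclusive) check
        if (!contains(1, 1, 5, 10, 1, 1, 5, 10)) throw new AssertionError();
        // starts before the container: not contained
        if (contains(2, 1, 5, 10, 1, 1, 3, 5)) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Switching the comparison to `<=` would make the end boundary exclusive and reject the identical-region case.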
codereview_java_data_13052 | .getSourceSets()
.matching(ss -> hasCompileDependenciesMatching(proj, ss, this::isJunitJupiter))
.forEach(ss -> {
- log.info("Detected 'org:junit.jupiter:junit-jupiter', enabling useJUnitPlatform()");
String testTaskName = ss.getTaskName(null, "test");
Test testTask = (Test) proj.getTasks().findByName(testTaskName);
if (testTask == null) {
Wanna move the `log.info` line from 54 to just above here, then we can also include which task name we're enabling it for?
.getSourceSets()
.matching(ss -> hasCompileDependenciesMatching(proj, ss, this::isJunitJupiter))
.forEach(ss -> {
String testTaskName = ss.getTaskName(null, "test");
Test testTask = (Test) proj.getTasks().findByName(testTaskName);
if (testTask == null) { |
codereview_java_data_13057 | @Generated("com.github.javaparser.generator.core.node.TypeCastingGenerator")
public BlockComment asBlockComment() {
- return (BlockComment) this;
}
@Generated("com.github.javaparser.generator.core.node.TypeCastingGenerator")
this one could be empty I guess
@Generated("com.github.javaparser.generator.core.node.TypeCastingGenerator")
public BlockComment asBlockComment() {
+ throw new IllegalStateException(f("%s is not an BlockComment", this));
}
@Generated("com.github.javaparser.generator.core.node.TypeCastingGenerator") |
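The pattern behind the fix — `is*` probes, `as*` throws on the base type and returns `this` only on the matching subtype — can be sketched with hypothetical stand-in classes (not JavaParser's real hierarchy):

```java
// Minimal sketch of the is/as casting pattern the generator emits.
// Comment/LineComment/BlockComment are simplified stand-ins.
public class AsPatternSketch {

    static abstract class Comment {
        boolean isBlockComment() { return false; }
        BlockComment asBlockComment() {
            // base implementation fails fast instead of class-casting
            throw new IllegalStateException(this + " is not a BlockComment");
        }
    }

    static class LineComment extends Comment { }

    static class BlockComment extends Comment {
        @Override boolean isBlockComment() { return true; }
        @Override BlockComment asBlockComment() { return this; }
    }

    public static void main(String[] args) {
        Comment block = new BlockComment();
        Comment line = new LineComment();
        if (!block.isBlockComment()) throw new AssertionError();
        boolean threw = false;
        try {
            line.asBlockComment();
        } catch (IllegalStateException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The gain over a bare `(BlockComment) this` cast is the descriptive message instead of a raw `ClassCastException`.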
codereview_java_data_13061 | public void execute() {
try {
LOG.info("Starting Ethereum main loop ... ");
- Runtime.getRuntime().addShutdownHook(shutdownTask());
networkRunner.start();
pantheonController.getSynchronizer().start();
jsonRpc.ifPresent(service -> service.start().join());
I think this needs to be in a level above `Runner` somewhere - probably in `PantheonCommand`. The shutdown hook is really part of the CLI integration - if you're embedding Pantheon into a bigger program the bigger program may have different shutdown policies. Also the shutdown hook would hold a permanent reference to the Runner causing a memory leak.
public void execute() {
try {
LOG.info("Starting Ethereum main loop ... ");
networkRunner.start();
pantheonController.getSynchronizer().start();
jsonRpc.ifPresent(service -> service.start().join()); |
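The reviewer's point — register the JVM shutdown hook at the CLI layer, not inside the runner — can be sketched with a hypothetical stand-in `Runner` (the real Pantheon classes are not reproduced here):

```java
// Sketch: the hook lives with the CLI entry point, so embedders that reuse
// Runner can apply their own shutdown policy, and the JVM does not hold a
// permanent reference to Runner through a hook it registered on itself.
public class ShutdownHookSketch {

    static class Runner implements AutoCloseable {
        volatile boolean stopped = false;

        void execute() {
            // start networking, synchronizer, JSON-RPC ... (omitted)
        }

        @Override
        public void close() {
            stopped = true;
        }
    }

    public static void main(String[] args) {
        Runner runner = new Runner();
        // CLI-level integration point for process-exit cleanup
        Runtime.getRuntime().addShutdownHook(new Thread(runner::close, "shutdown"));
        runner.execute();
        System.out.println("ok");
    }
}
```

An embedding application would simply skip the `addShutdownHook` call and invoke `close()` under its own lifecycle.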
codereview_java_data_13064 | assertEquals(i, key.get());
}
reader.close();
}
}
Will other test see this file?
assertEquals(i, key.get());
}
reader.close();
+
+ assertTrue(fs.delete(new Path(manyMaps, SortedLogState.FAILED.getMarker())));
}
} |
codereview_java_data_13065 | @JsonProperty("forcePullImage") Optional<Boolean> forcePullImage,
@JsonProperty("parameters") Optional<Map<String, String>> parameters,
@JsonProperty("dockerParameters") Optional<List<SingularityDockerParameter>> dockerParameters) {
- if (dockerParameters.isPresent() && !dockerParameters.get().isEmpty() && parameters.isPresent() && !parameters.get().isEmpty()) {
- throw new IllegalArgumentException("Can only specify one of 'parameters' or 'dockerParameters'");
- }
this.image = image;
this.privileged = privileged;
this.network = Optional.fromNullable(network);
I'm a little hesitant of putting these kinds of exceptions in constructors -- if we somehow had a bogus one saved we'd be throwing every time it was deserialized. What do you think about moving this into `SingularityValidator` instead?
@JsonProperty("forcePullImage") Optional<Boolean> forcePullImage,
@JsonProperty("parameters") Optional<Map<String, String>> parameters,
@JsonProperty("dockerParameters") Optional<List<SingularityDockerParameter>> dockerParameters) {
this.image = image;
this.privileged = privileged;
this.network = Optional.fromNullable(network); |
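Moving the mutual-exclusion rule out of the constructor means a previously persisted "bad" object can still be deserialized, while new submissions are still rejected at validation time. A simplified sketch (java.util `Optional` instead of Guava, and a stand-in `checkDockerInfo` in place of the real `SingularityValidator`):

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class ValidatorSketch {

    static class DockerInfo {
        final Optional<Map<String, String>> parameters;
        final Optional<List<String>> dockerParameters;

        DockerInfo(Optional<Map<String, String>> parameters,
                   Optional<List<String>> dockerParameters) {
            // constructor only stores state; no validation here, so
            // deserializing an already-saved object never throws
            this.parameters = parameters;
            this.dockerParameters = dockerParameters;
        }
    }

    // Validator-level check, applied to new submissions only.
    static void checkDockerInfo(DockerInfo info) {
        boolean hasParams = info.parameters.isPresent() && !info.parameters.get().isEmpty();
        boolean hasDockerParams = info.dockerParameters.isPresent() && !info.dockerParameters.get().isEmpty();
        if (hasParams && hasDockerParams) {
            throw new IllegalArgumentException("Can only specify one of 'parameters' or 'dockerParameters'");
        }
    }

    public static void main(String[] args) {
        // constructing a conflicting object no longer throws ...
        DockerInfo bad = new DockerInfo(
            Optional.of(Map.of("k", "v")),
            Optional.of(List.of("p")));
        // ... but validating it still does
        boolean rejected = false;
        try {
            checkDockerInfo(bad);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError();
        System.out.println("ok");
    }
}
```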