| id | content |
|---|---|
codereview_java_data_11128 | String uid = fetchList.getKeyedString("UID");
long msgSeq = response.getLong(0);
if (uid != null) {
- try {
- msgSeqUidMap.put(msgSeq, uid);
- if (K9MailLib.isDebug()) {
- Timber.v("Stored uid '%s' for msgSeq %d into map", uid, msgSeq);
- }
- } catch (Exception e) {
- Timber.e("Unable to store uid '%s' for msgSeq %d", uid, msgSeq);
}
}
Exception is a very broad catch case. Try to narrow this down a bit.
String uid = fetchList.getKeyedString("UID");
long msgSeq = response.getLong(0);
if (uid != null) {
+ msgSeqUidMap.put(msgSeq, uid);
+ if (K9MailLib.isDebug()) {
+ Timber.v("Stored uid '%s' for msgSeq %d into map", uid, msgSeq);
}
} |
codereview_java_data_11140 | @SuppressWarnings("unchecked")
<RET> RET attachFilter(@Nonnull PredicateEx<T> filterFn) {
checkSerializable(filterFn, "filterFn");
- PredicateEx<T> adaptedPred = (PredicateEx<T>) fnAdapter.adaptFilterFn(filterFn);
- FunctionEx<T, T> adaptedFn = MetricsOperatorUtil.wrap(t -> adaptedPred.test(t) ? t : null, filterFn);
- return (RET) attach(new MapTransform<T, T>("filter", transform, adaptedFn), fnAdapter);
}
@Nonnull
wouldn't it be better to do the wrapping as part of the adaptFilterFn? also in case of a stream, after adapting the original function is rewritten so you would need to do it before.
@SuppressWarnings("unchecked")
<RET> RET attachFilter(@Nonnull PredicateEx<T> filterFn) {
checkSerializable(filterFn, "filterFn");
+ FunctionEx<T, T> adaptedFn = fnAdapter.filterPredicateToFn(filterFn);
+ return (RET) attach(new MapTransform<>("filter", transform, adaptedFn), fnAdapter);
}
@Nonnull |
codereview_java_data_11146 | validateIntegerValue(domain.getMemberExpiryDays(), "memberExpiryDays");
validateIntegerValue(domain.getRoleCertExpiryMins(), "roleCertExpiryMins");
validateIntegerValue(domain.getServiceExpiryDays(), "serviceExpiryDays");
- validateIntegerValue(domain.getGroupExpiryDays(), "serviceExpiryDays");
validateIntegerValue(domain.getTokenExpiryMins(), "tokenExpiryMins");
validateString(domain.getApplicationId(), TYPE_COMPOUND_NAME, caller);
second argument should be "groupExpiryDays"
validateIntegerValue(domain.getMemberExpiryDays(), "memberExpiryDays");
validateIntegerValue(domain.getRoleCertExpiryMins(), "roleCertExpiryMins");
validateIntegerValue(domain.getServiceExpiryDays(), "serviceExpiryDays");
+ validateIntegerValue(domain.getGroupExpiryDays(), "groupExpiryDays");
validateIntegerValue(domain.getTokenExpiryMins(), "tokenExpiryMins");
validateString(domain.getApplicationId(), TYPE_COMPOUND_NAME, caller); |
codereview_java_data_11150 | if (null == producer) {
return;
}
-
try {
- byte[] bytes = this.getLayout().toByteArray(event);
- Message msg = new Message(topic, tag, bytes);
msg.getProperties().put(ProducerInstance.APPENDER_TYPE, ProducerInstance.LOG4J2_APPENDER);
//Send message and do not wait for the ack from the message broker.
May be we want to know which log message isn't send to server?
if (null == producer) {
return;
}
+ byte[] data = this.getLayout().toByteArray(event);
try {
+ Message msg = new Message(topic, tag, data);
msg.getProperties().put(ProducerInstance.APPENDER_TYPE, ProducerInstance.LOG4J2_APPENDER);
//Send message and do not wait for the ack from the message broker. |
codereview_java_data_11151 | private static final String METADATA_FOLDER_NAME = "metadata";
private static final String DATA_FOLDER_NAME = "data";
- private final FileIO fileIo;
-
private TableMetadata currentMetadata = null;
private String currentMetadataLocation = null;
private boolean shouldRefresh = true;
private int version = -1;
- protected BaseMetastoreTableOperations(FileIO fileIo) {
- this.fileIo = fileIo;
- }
@Override
public TableMetadata current() {
As an alternative, we can consider providing an abstract method that subclasses should implement. That way, we have more flexibility over initializing `FileIO`. Calling parent constructors must always be the first call in child constructors, which means we will have problems when initializing `FileIO` requires calling another method. @jerryshao what do you think?
private static final String METADATA_FOLDER_NAME = "metadata";
private static final String DATA_FOLDER_NAME = "data";
private TableMetadata currentMetadata = null;
private String currentMetadataLocation = null;
private boolean shouldRefresh = true;
private int version = -1;
+ protected BaseMetastoreTableOperations() { }
@Override
public TableMetadata current() { |
codereview_java_data_11154 | /**
* Check config info.
*/
- private void checkConfigInfo() {
// Dispatch taskes.
int listenerSize = cacheMap.get().size();
// Round up the longingTaskCount.
if you remove checkInfo, how do you handle doing dynamic task creation
/**
* Check config info.
*/
+ private void checkLongPollingTaskSize() {
// Dispatch taskes.
int listenerSize = cacheMap.get().size();
// Round up the longingTaskCount. |
codereview_java_data_11162 | private final Capabilities stereotype;
private final SessionId id;
- private final boolean isDraining = true;
private final Capabilities currentCapabilities;
private final Instant startTime;
Nodes are draining by default. Better to leave this as the default `false` value, no?
private final Capabilities stereotype;
private final SessionId id;
private final Capabilities currentCapabilities;
private final Instant startTime; |
codereview_java_data_11164 | String tableName = getTableName(tableId);
try {
boolean result = super.canDeleteTable(c, tableId, namespaceId);
- audit(c, result, CAN_DELETE_TABLE_AUDIT_TEMPLATE, tableName);
return result;
} catch (ThriftSecurityException ex) {
- audit(c, ex, CAN_DELETE_TABLE_AUDIT_TEMPLATE, tableName);
throw ex;
}
}
Why drop the tableId here?
String tableName = getTableName(tableId);
try {
boolean result = super.canDeleteTable(c, tableId, namespaceId);
+ audit(c, result, CAN_DELETE_TABLE_AUDIT_TEMPLATE, tableName, tableId);
return result;
} catch (ThriftSecurityException ex) {
+ audit(c, ex, CAN_DELETE_TABLE_AUDIT_TEMPLATE, tableName, tableId);
throw ex;
}
} |
codereview_java_data_11183 | }
protected String getFullQualifiedTableName(String tableName) {
- final List<String> levels = new ArrayList<>(Arrays.asList(icebergNamespace.levels()));
levels.add(tableName);
return Joiner.on('.').join(levels);
}
Nit: in apache iceberg, we usually use the unified `Lists.newArrayLists` to create a new ArrayList.
}
protected String getFullQualifiedTableName(String tableName) {
+ final List<String> levels = Lists.newArrayList(icebergNamespace.levels());
levels.add(tableName);
return Joiner.on('.').join(levels);
} |
codereview_java_data_11184 | boolean active = true;
if (propsFile.exists()) {
try {
- List<String> lines = Files.newBufferedReader(propsFile.toPath()).lines().collect(
- Collectors.toList());
for (String line : lines) {
if (line.equals("# linter:ON")) {
active = true;
you're leaking the handle here. You can just do `Files#readAllLines`
boolean active = true;
if (propsFile.exists()) {
try {
+ List<String> lines = Files.readAllLines(propsFile.toPath());
for (String line : lines) {
if (line.equals("# linter:ON")) {
active = true; |
codereview_java_data_11198 | private void reportHeartbeatError(PhysicalDbInstance ins) throws IOException {
final DbInstanceConfig config = ins.getConfig();
- String heartbeatError = "the dbInstance[" + config.getUrl() + "] can't reach. Please check the dbInstance status";
if (dbGroupConfig.isShowSlaveSql()) {
- heartbeatError += " and the privileges of user ( NOTE:heartbeat[show slave status] need grant the SUPER or REPLICATION CLIENT privilege(s) to db user,and then restart the dble.)";
}
LOGGER.warn(heartbeatError);
Map<String, String> labels = AlertUtil.genSingleLabel("dbInstance", dbGroupConfig.getName() + "-" + config.getInstanceName());
or fresh conn pool?
private void reportHeartbeatError(PhysicalDbInstance ins) throws IOException {
final DbInstanceConfig config = ins.getConfig();
+ String heartbeatError = "the dbInstance[" + config.getUrl() + "] can't reach. Please check the dbInstance is accessible";
if (dbGroupConfig.isShowSlaveSql()) {
+ heartbeatError += " and the privileges of user is sufficient (NOTE:heartbeat[show slave status] need grant the SUPER or REPLICATION CLIENT privilege(s) to db user,and then restart the dble or fresh conn).";
}
LOGGER.warn(heartbeatError);
Map<String, String> labels = AlertUtil.genSingleLabel("dbInstance", dbGroupConfig.getName() + "-" + config.getInstanceName()); |
codereview_java_data_11200 | package org.apache.fineract.notification;
-import edu.emory.mathcs.backport.java.util.Collections;
import org.apache.fineract.notification.domain.Notification;
import org.apache.fineract.notification.domain.NotificationMapper;
import org.apache.fineract.notification.service.NotificationGeneratorReadRepositoryWrapper;
just regular `java.util.Collections` would be better than `edu.emory`
package org.apache.fineract.notification;
import org.apache.fineract.notification.domain.Notification;
import org.apache.fineract.notification.domain.NotificationMapper;
import org.apache.fineract.notification.service.NotificationGeneratorReadRepositoryWrapper; |
codereview_java_data_11213 | if (config.networked && config.big) {
return;
}
-// reconnectOften = !config.memory && config.big;
testScript("testScript.sql");
testScript("comments.sql");
I'd rather leave this in and split the script files whereever we change mode
if (config.networked && config.big) {
return;
}
+ reconnectOften = !config.memory && config.big;
testScript("testScript.sql");
testScript("comments.sql"); |
codereview_java_data_11221 | createRecycleAdapter(recyclerView, emptyView);
}
- if(feedItemFilter.getValues().length > 0) {
txtvInformation.setText("{fa-info-circle} " + this.getString(R.string.filtered_label));
Iconify.addIcons(txtvInformation);
txtvInformation.setVisibility(View.VISIBLE);
Please add a space between `if` and `(`. Same for other lines in this PR. I know that AntennaPod does not have a consistent code style but I want to make sure that at least the new code is consistent. (changing the code style for the whole project at once would make git history and git blame ugly)
createRecycleAdapter(recyclerView, emptyView);
}
+ if (feedItemFilter.getValues().length > 0) {
txtvInformation.setText("{fa-info-circle} " + this.getString(R.string.filtered_label));
Iconify.addIcons(txtvInformation);
txtvInformation.setVisibility(View.VISIBLE); |
codereview_java_data_11222 | public String getCipherSuiteName();
/**
- * Returns the language code that should be used for this connection
- * (e.g. "en").
*
- * @return the language code for the connection.
*/
Locale getLanguage();
}
\ No newline at end of file
Might prefer "session" here rather than "connection" (also below), but not a blocker.
public String getCipherSuiteName();
/**
+ * Returns the locale that is used for this session (e.g. {@link Locale#ENGLISH}).
*
+ * @return The language for the session.
*/
Locale getLanguage();
}
\ No newline at end of file |
codereview_java_data_11230 | public void hideKeyboard(View view) {
- Log.i("hide", "hideKeyboard: ");
InputMethodManager inputMethodManager =(InputMethodManager)this.getSystemService(Activity.INPUT_METHOD_SERVICE);
inputMethodManager.hideSoftInputFromWindow(view.getWindowToken(), 0);
}
Please use Timber for logging. :)
public void hideKeyboard(View view) {
InputMethodManager inputMethodManager =(InputMethodManager)this.getSystemService(Activity.INPUT_METHOD_SERVICE);
inputMethodManager.hideSoftInputFromWindow(view.getWindowToken(), 0);
} |
codereview_java_data_11246 | if (reader != null)
reader.close();
}
-
- notifyConversationListListeners();
- return false;
}
public static interface ProgressListener {
move this up into the if block
if (reader != null)
reader.close();
}
}
public static interface ProgressListener { |
codereview_java_data_11249 | return true;
}
- private void attemptAuthorization(MethodInvocation mi) {
- this.logger.debug(LogMessage.of(() -> "Authorizing method invocation " + mi));
- AuthorizationDecision decision = this.authorizationManager.check(AUTHENTICATION_SUPPLIER, mi);
- if (decision != null && !decision.isGranted()) {
- this.logger.debug(LogMessage.of(() -> "Failed to authorize " + mi + " with authorization manager "
- + this.authorizationManager + " and decision " + decision));
- throw new AccessDeniedException("Access Denied");
- }
- this.logger.debug(LogMessage.of(() -> "Authorized method invocation " + mi));
- }
-
}
I think there's value in a trace log here stating that you are about to authorize, similar to what's in `AbstractSecurityInterceptor`.
return true;
}
} |
codereview_java_data_11262 | * <p>
* Optional. Set a bundle to customize UI
* </p>
- * @param mExtraCustomBundle Optional: Pass to chrome custom tab a bundle of customization flags
* @return A reference to this builder.
*/
@SuppressWarnings("checkstyle:hiddenfield")
- public Builder setExtraCustomBundle(final Bundle mExtraCustomBundle) {
- this.mExtraCustomBundle = mExtraCustomBundle;
return this;
}
I would suggest something like `setCustomTabsExtras`
* <p>
* Optional. Set a bundle to customize UI
* </p>
+ * @param mCustomTabsExtras Optional: Pass to chrome custom tab a bundle of customization flags
* @return A reference to this builder.
*/
@SuppressWarnings("checkstyle:hiddenfield")
+ public Builder setCustomTabsExtras(final Bundle mCustomTabsExtras) {
+ this.mCustomTabsExtras = mCustomTabsExtras;
return this;
} |
codereview_java_data_11272 | * no-op.
*
* @param createServiceFn the function that creates the service instance.
- * It must be stateless
* @return a copy of this factory with the supplied create-service-function
*/
@Nonnull
```suggestion * It must be stateless. ```
* no-op.
*
* @param createServiceFn the function that creates the service instance.
+ * It must be stateless.
* @return a copy of this factory with the supplied create-service-function
*/
@Nonnull |
codereview_java_data_11277 | @Override
public void actionPerformed(ActionEvent e) {
configuration.setHideAdvancedOptions(hideAdvancedOptions.isSelected());
- if (hideAdvancedOptions.isSelected())
looksFrame.setViewLevel(ViewLevel.NORMAL);
- else
looksFrame.setViewLevel(ViewLevel.ADVANCED);
}
});
builder.add(GuiUtil.getPreferredSizeComponent(hideAdvancedOptions), FormLayoutUtil.flip(cc.xyw(1, ypos, 9), colSpec, orientation));
@Nadahar curly braces please :)
@Override
public void actionPerformed(ActionEvent e) {
configuration.setHideAdvancedOptions(hideAdvancedOptions.isSelected());
+ if (hideAdvancedOptions.isSelected()) {
looksFrame.setViewLevel(ViewLevel.NORMAL);
+ } else {
looksFrame.setViewLevel(ViewLevel.ADVANCED);
+ }
}
});
builder.add(GuiUtil.getPreferredSizeComponent(hideAdvancedOptions), FormLayoutUtil.flip(cc.xyw(1, ypos, 9), colSpec, orientation)); |
codereview_java_data_11282 | import com.intellij.psi.PsiElement;
import com.intellij.psi.util.PsiTreeUtil;
-/**
- * Additional syntax highlighting, based on parsed PSI elements.
- */
public class HighlightingAnnotator extends BuildAnnotator {
@Override
Nit: Please revert this new formatting. We're using the latest version of google-java-format, which would revert this line the next time we touch this file.
import com.intellij.psi.PsiElement;
import com.intellij.psi.util.PsiTreeUtil;
+/** Additional syntax highlighting, based on parsed PSI elements. */
public class HighlightingAnnotator extends BuildAnnotator {
@Override |
codereview_java_data_11283 | }
return result;
}
-
-// private static Playable createRemoteMediaInstance(SharedPreferences pref) {
-// //TODO there's probably no point in restoring RemoteMedia from preferences, because we
-// //only care about it while it's playing on the cast device.
-// return null;
-// }
}
class PlayableException extends Exception {
Delete? It's in the git history, anyway.
}
return result;
}
}
class PlayableException extends Exception { |
codereview_java_data_11286 | }
public boolean isNetworkProviderEnabled() {
- return locationManager.isProviderEnabled(String.valueOf(R.string.network_provider));
}
public boolean isGPSProviderEnabled() {
- return locationManager.isProviderEnabled(String.valueOf(R.string.gps_provider));
}
public enum LocationChangeType{
These should be constants or enums
}
public boolean isNetworkProviderEnabled() {
+ return locationManager.isProviderEnabled(LocationManager.NETWORK_PROVIDER);
}
public boolean isGPSProviderEnabled() {
+ return locationManager.isProviderEnabled(LocationManager.GPS_PROVIDER);
}
public enum LocationChangeType{ |
codereview_java_data_11292 | }
private static void deleteFileIfExists(File to) throws IOException {
- boolean fileDoesNotExist = to.exists();
if (fileDoesNotExist) {
return;
}
There seems to be a `!` missing.
}
private static void deleteFileIfExists(File to) throws IOException {
+ boolean fileDoesNotExist = !to.exists();
if (fileDoesNotExist) {
return;
} |
codereview_java_data_11324 | return true;
}
if (putObjectRequest.getSSEAwsKeyManagementParams() != null &&
- putObjectRequest.getSSEAwsKeyManagementParams().getAwsKmsKeyId() != null) {
return true;
}
} else if (request instanceof UploadPartRequest) {
Include check for getEncryption as well as key is optional. ( putObjectRequest.getSSEAwsKeyManagementParams() != null && ( putObjectRequest.getSSEAwsKeyManagementParams().getAwsKmsKeyId() != null || putObjectRequest.getSSEAwsKeyManagementParams.getEncryption() != null ) )
return true;
}
if (putObjectRequest.getSSEAwsKeyManagementParams() != null &&
+ (putObjectRequest.getSSEAwsKeyManagementParams().getEncryption() != null ||
+ putObjectRequest.getSSEAwsKeyManagementParams().getAwsKmsKeyId() != null)) {
return true;
}
} else if (request instanceof UploadPartRequest) { |
codereview_java_data_11338 | "FALSE" : "TRUE");
add(rows, "MODE", database.getMode().getName());
add(rows, "MULTI_THREADED", database.isMultiThreaded() ? "1" : "0");
- add(rows, "MVCC", database.isMVStore() ? "TRUE" : "FALSE");
add(rows, "QUERY_TIMEOUT", Integer.toString(session.getQueryTimeout()));
add(rows, "RETENTION_TIME", Integer.toString(database.getRetentionTime()));
add(rows, "LOG", Integer.toString(database.getLogMode()));
fairly sure we can drop this line of code
"FALSE" : "TRUE");
add(rows, "MODE", database.getMode().getName());
add(rows, "MULTI_THREADED", database.isMultiThreaded() ? "1" : "0");
add(rows, "QUERY_TIMEOUT", Integer.toString(session.getQueryTimeout()));
add(rows, "RETENTION_TIME", Integer.toString(database.getRetentionTime()));
add(rows, "LOG", Integer.toString(database.getLogMode())); |
codereview_java_data_11342 | }
public static void calculateAllSpeeds() {
- for (Entry<InetAddress, RendererConfiguration> key : addressAssociation.entrySet()) {
- InetAddress sa = key.getKey();
if (sa.isLoopbackAddress() || sa.isAnyLocalAddress()) {
continue;
}
- RendererConfiguration r = key.getValue();
if (!r.isOffline()) {
SpeedStats.getInstance().getSpeedInMBits(sa, r.getRendererName());
}
I'd say it's very confusing that the entry is called "key"
}
public static void calculateAllSpeeds() {
+ for (Entry<InetAddress, RendererConfiguration> entry : addressAssociation.entrySet()) {
+ InetAddress sa = entry.getKey();
if (sa.isLoopbackAddress() || sa.isAnyLocalAddress()) {
continue;
}
+ RendererConfiguration r = entry.getValue();
if (!r.isOffline()) {
SpeedStats.getInstance().getSpeedInMBits(sa, r.getRendererName());
} |
codereview_java_data_11348 | * @return a new module which imports the original user module and a set of marker modules.
*/
public Module getConfigGrammar(Module mod) {
- // import CONFIG-CELLS in order to parse cells specific to configurations
- Module newM = new Module( mod.name() + "-" + CONFIG_CELLS
- , (scala.collection.Set<Module>) mod.publicImports().$bar(Set(baseK.getModule(K).get(), baseK.getModule(CONFIG_CELLS).get(), baseK.getModule(DEFAULT_LAYOUT).get()))
- , mod.privateImports()
- , mod.localSentences()
- , mod.att()
- );
- return newM;
}
/**
This pattern than `getConfigGrammar` uses seems like it could be extracted into a single function `getGrammar`, which takes as input `Module mod, String cellName`, then `getConfigGrammar(Module mod) => getGramma(mod, mod.name() + "-" + CONFIG_CELLS)`, for instance. That would deduplicate `getRuleGrammar` and `getConfigGrammar`
* @return a new module which imports the original user module and a set of marker modules.
*/
public Module getConfigGrammar(Module mod) {
+ return getGrammar(mod, CONFIG_CELLS);
}
/** |
codereview_java_data_11356 | .transportFactory(new TTransportFactory())
.protocolFactory(new TBinaryProtocol.Factory())
.minWorkerThreads(3)
- .maxWorkerThreads(8);
return new TThreadPoolServer(args);
}
Why was this change required?
.transportFactory(new TTransportFactory())
.protocolFactory(new TBinaryProtocol.Factory())
.minWorkerThreads(3)
+ .maxWorkerThreads(10);
return new TThreadPoolServer(args);
} |
codereview_java_data_11360 | @Override
protected void validate(TableMetadata base) {
}
}
This should not be removed. There is a valid use case for the operation to check whether there are conflicts instead of re-sequencing data files. If you want, you can add a configuration method to enable/disable the validation.
@Override
protected void validate(TableMetadata base) {
+ if (replacedDataFiles.size() > 0) {
+ // if there are replaced data files, there cannot be any new row-level deletes for those data files
+ validateNoNewDeletesForDataFiles(base, startingSnapshotId, replacedDataFiles);
+ }
}
} |
codereview_java_data_11363 | * </pre>
*/
public class ImapStore extends RemoteStore {
- public static final ServerSettings.Type STORE_TYPE = ServerSettings.Type.IMAP;
private static final int IDLE_READ_TIMEOUT_INCREMENT = 5 * 60 * 1000;
private static final int IDLE_FAILURE_COUNT_LIMIT = 10;
With the introduction of the enum(s) the constants in the store classes should be removed.
* </pre>
*/
public class ImapStore extends RemoteStore {
private static final int IDLE_READ_TIMEOUT_INCREMENT = 5 * 60 * 1000;
private static final int IDLE_FAILURE_COUNT_LIMIT = 10; |
codereview_java_data_11366 | public static final FamilyOperandTypeChecker STRING_STRING_STRING =
family(SqlTypeFamily.STRING, SqlTypeFamily.STRING, SqlTypeFamily.STRING);
- public static final SqlSingleOperandTypeChecker STRING_STRING_OPTIONAL_STRING =
family(ImmutableList.of(SqlTypeFamily.STRING, SqlTypeFamily.STRING, SqlTypeFamily.STRING),
// Third operand optional (operand index 0, 1, 2)
number -> number == 2);
The type should be `FamilyOperandTypeChecker`.
public static final FamilyOperandTypeChecker STRING_STRING_STRING =
family(SqlTypeFamily.STRING, SqlTypeFamily.STRING, SqlTypeFamily.STRING);
+ public static final FamilyOperandTypeChecker STRING_STRING_OPTIONAL_STRING =
family(ImmutableList.of(SqlTypeFamily.STRING, SqlTypeFamily.STRING, SqlTypeFamily.STRING),
// Third operand optional (operand index 0, 1, 2)
number -> number == 2); |
codereview_java_data_11368 | if (reportOn(target)) {
MetricOptions options = MetricOptions.ofOptions(getProperty(optionsDescriptor));
N reportLevel = parseReportLevel(getProperty(reportLevelDescriptor));
- N result = Metric.compute(metric, options, target);
if (result != null && reportLevel.compareTo(result) <= 0) {
addViolationWithMessage(ctx, target, violationMessage(target, result));
I think, that class can be probably moved to pmd-test module under src/main/java.
if (reportOn(target)) {
MetricOptions options = MetricOptions.ofOptions(getProperty(optionsDescriptor));
N reportLevel = parseReportLevel(getProperty(reportLevelDescriptor));
+ N result = Metric.compute(metric, target, options);
if (result != null && reportLevel.compareTo(result) <= 0) {
addViolationWithMessage(ctx, target, violationMessage(target, result)); |
codereview_java_data_11371 | if (clipboardManager != null) {
clipboardManager.setPrimaryClip(ClipData.newPlainText("AntennaPod", text));
}
- Toast.makeText(requireContext(), getResources().getString(R.string.copied_to_clipboard), Toast.LENGTH_SHORT).show();
return true;
}
}
Please use a Snackbar instead to be consistent with other parts of the app. We have a method called `((MainActivity) getActivity()).showSnackbarAbovePlayer` for that.
if (clipboardManager != null) {
clipboardManager.setPrimaryClip(ClipData.newPlainText("AntennaPod", text));
}
+ ((MainActivity) requireActivity()).showSnackbarAbovePlayer(getResources().getString(R.string.copied_to_clipboard), Snackbar.LENGTH_SHORT);
return true;
}
} |
codereview_java_data_11373 | import org.springframework.cloud.deployer.spi.local.LocalDeployerAutoConfiguration;
/**
- * Bootstrap class for the local Spring Cloud Data Flow Server.
*
* @author Mark Fisher
* @author Ilayaperumal Gopinathan
We can get rid of `local` and say something about the `single` server.
import org.springframework.cloud.deployer.spi.local.LocalDeployerAutoConfiguration;
/**
+ * Bootstrap class for the Spring Cloud Data Flow Server.
*
* @author Mark Fisher
* @author Ilayaperumal Gopinathan |
codereview_java_data_11380 | * the Hadoop context for the configured job
* @param scanner
* the scanner to configure
- * @since 1.6.0
*/
@Override
- protected void setupIterators(TaskAttemptContext context, ScannerBase scanner, String tableName, org.apache.accumulo.core.client.mapreduce.AccumuloInputSplit split) {
setupIterators(context, scanner, split);
}
Sorry, we can't change these as they'd break API from 1.6.0. We can add to the API for 1.7.0, but we can't remove. Need to deprecate the ones that take Scanner/RangeInputSplit and add in a new one for ScannerBase/AccumuloInputSplit.
* the Hadoop context for the configured job
* @param scanner
* the scanner to configure
+ * @since 1.7.0
*/
@Override
+ protected void setupIterators(TaskAttemptContext context, ScannerBase scanner, String tableName, AccumuloInputSplit split) {
setupIterators(context, scanner, split);
} |
codereview_java_data_11383 | public void onPause() {
super.onPause();
savePreference();
-
}
private void savePreference() {
Please revert non-code changes to keep the git history clean
public void onPause() {
super.onPause();
savePreference();
+ webvDescription.scrollTo(0,0);
}
private void savePreference() { |
codereview_java_data_11385 | * A way for plugins to quickly save a call that they will need to reference
* between activity/permissions starts/requests
*
- * @deprecated use {@link #savedLastCallId} instead in conjunction with bridge methods
* {@link com.getcapacitor.Bridge#saveCall(PluginCall)},
* {@link com.getcapacitor.Bridge#getSavedCall(String)} and
* {@link com.getcapacitor.Bridge#releaseCall(PluginCall)}
Do you think this still promotes an undesirable workflow where plugin devs are using a single variable to store multiple call IDs? I am almost of the opinion the plugin dev should be fully responsible for managing their call IDs. What do you think?
* A way for plugins to quickly save a call that they will need to reference
* between activity/permissions starts/requests
*
+ * @deprecated store calls on the bridge using the methods
* {@link com.getcapacitor.Bridge#saveCall(PluginCall)},
* {@link com.getcapacitor.Bridge#getSavedCall(String)} and
* {@link com.getcapacitor.Bridge#releaseCall(PluginCall)} |
codereview_java_data_11390 | var compactingFiles =
compacting.stream().flatMap(job -> job.getFiles().stream()).collect(Collectors.toSet());
Preconditions.checkArgument(this.allFiles.containsAll(compactingFiles),
- "Compacting not in set of all files %s, compacting files %s", this.allFiles,
compactingFiles);
Preconditions.checkArgument(Collections.disjoint(compactingFiles, this.candidates),
"Compacting and candidates overlap %s %s", compactingFiles, this.candidates);
```suggestion "Compacting not in set of all files: %s, compacting files: %s", this.allFiles, ```
var compactingFiles =
compacting.stream().flatMap(job -> job.getFiles().stream()).collect(Collectors.toSet());
Preconditions.checkArgument(this.allFiles.containsAll(compactingFiles),
+ "Compacting not in set of all files: %s, compacting files: %s", this.allFiles,
compactingFiles);
Preconditions.checkArgument(Collections.disjoint(compactingFiles, this.candidates),
"Compacting and candidates overlap %s %s", compactingFiles, this.candidates); |
codereview_java_data_11396 | import org.springframework.boot.ansi.AnsiStyle;
import org.springframework.core.env.Environment;
import org.springframework.core.io.ClassPathResource;
-import org.springframework.core.io.Resource;
import org.springframework.util.StreamUtils;
import static java.nio.charset.StandardCharsets.UTF_8;
Can the banner be printed twice? I guess it can be a local. Or otherwise static
import org.springframework.boot.ansi.AnsiStyle;
import org.springframework.core.env.Environment;
import org.springframework.core.io.ClassPathResource;
import org.springframework.util.StreamUtils;
import static java.nio.charset.StandardCharsets.UTF_8; |
codereview_java_data_11409 | // Built-in checks
"ArrayEquals",
"MissingOverride",
- "MutableConstantField",
"UnusedMethod",
"UnusedVariable");
Annoyingly the auto-fix for this actually results in quite a lot of lines going over the 120 char limit we have... Given that this means more manual actions to merge one of these PRs (and might block baseline upgrades), could we actually just drop this one for now and keep the others? I tried it out locally by adding: ```gradle plugins.withId('com.palantir.baseline-error-prone') { plugins.withId('java') { baselineErrorProne { patchChecks.add('MutableConstantField') } } } ``` And then running `./gradlew classes -PerrorProneApply`
// Built-in checks
"ArrayEquals",
"MissingOverride",
"UnusedMethod",
"UnusedVariable"); |
codereview_java_data_11418 | if (deploy.isPresent() && deploy.get().getRunImmediately().isPresent()) {
String requestId = deploy.get().getRequestId();
SingularityRunNowRequest runNowRequest = deploy.get().getRunImmediately().get();
List<SingularityTaskId> activeTasks = taskManager.getActiveTaskIdsForRequest(requestId);
List<SingularityPendingTaskId> pendingTasks = taskManager.getPendingTaskIdsForRequest(requestId);
SingularityPendingRequestBuilder builder = new SingularityPendingRequestBuilder()
.setRequestId(requestId)
- .setDeployId(deploy.get().getId())
.setTimestamp(deployResult.getTimestamp())
.setUser(pendingDeploy.getDeployMarker().getUser())
.setCmdLineArgsList(runNowRequest.getCommandLineArgs())
Should we be handling the case where pendingType == null here?
if (deploy.isPresent() && deploy.get().getRunImmediately().isPresent()) {
String requestId = deploy.get().getRequestId();
+ String deployId = deploy.get().getId();
SingularityRunNowRequest runNowRequest = deploy.get().getRunImmediately().get();
List<SingularityTaskId> activeTasks = taskManager.getActiveTaskIdsForRequest(requestId);
List<SingularityPendingTaskId> pendingTasks = taskManager.getPendingTaskIdsForRequest(requestId);
SingularityPendingRequestBuilder builder = new SingularityPendingRequestBuilder()
.setRequestId(requestId)
+ .setDeployId(deployId)
.setTimestamp(deployResult.getTimestamp())
.setUser(pendingDeploy.getDeployMarker().getUser())
.setCmdLineArgsList(runNowRequest.getCommandLineArgs()) |
codereview_java_data_11421 | }
}
} finally {
SingletonManager.setMode(Mode.CLOSED);
}
It might be useful to keep the option, so it doesn't break scripts, even if it is now a noop.
}
}
+ // Remove the tracers, we don't use them anymore.
+ @SuppressWarnings("deprecation")
+ String path = siteConf.get(Property.TRACE_ZK_PATH);
+ try {
+ zapDirectory(zoo, path, opts);
+ } catch (Exception e) {
+ // do nothing if the /tracers node does not exist.
+ }
} finally {
SingletonManager.setMode(Mode.CLOSED);
} |
codereview_java_data_11424 | @Override
public Row lockRow(Session session, Row row) {
- syncLastModificationIdWithDatabase();
- return primaryIndex.lockRow(session, row);
}
private void analyzeIfRequired(Session session) {
Taking a lock is not a data modification, so why do we need it here? Taking a lock should not prevent anybody from using cached result of a previous query etc.
@Override
public Row lockRow(Session session, Row row) {
+ Row lockedRow = primaryIndex.lockRow(session, row);
+ if (lockedRow == null || !row.hasSharedData(lockedRow)) {
+ syncLastModificationIdWithDatabase();
+ }
+ return lockedRow;
}
private void analyzeIfRequired(Session session) { |
codereview_java_data_11426 | * @return {@code this}.
*/
@java.lang.SuppressWarnings("all")
- public SarifLog.PropertyBag.PropertyBagBuilder tags(final String[] tags) {
this.tags = tags;
return this;
}
this is for instance one of the unpleasant things with arrays, the toString will print `[Ljava.lang.String;@1234567`
* @return {@code this}.
*/
@java.lang.SuppressWarnings("all")
+ public SarifLog.PropertyBag.PropertyBagBuilder tags(final Set<String> tags) {
this.tags = tags;
return this;
} |
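The `[Ljava.lang.String;@1234567` problem the reviewer mentions is easy to reproduce in isolation. A minimal, self-contained sketch (class and tag values are invented for illustration):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class TagsDemo {
    // Arrays inherit Object.toString(), so logging them is useless;
    // collections print their elements.
    public static String arrayStyle(String[] tags) {
        return tags.toString(); // e.g. "[Ljava.lang.String;@1b6d3586"
    }

    public static String collectionStyle(String[] tags) {
        // LinkedHashSet keeps insertion order and deduplicates.
        Set<String> set = new LinkedHashSet<>(Arrays.asList(tags));
        return set.toString(); // e.g. "[fast, slow]"
    }
}
```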
codereview_java_data_11432 | for (int i = 0; i < union.getInputs().size(); i++) {
RelNode input = union.getInput(i);
List<Pair<RexNode, String>> newChildExprs = new ArrayList<>();
- for (int j = 0; j < refsIndex.cardinality(); j++) {
- int pos = refsIndex.nth(j);
newChildExprs.add(
- Pair.<RexNode, String>of(rexBuilder.makeInputRef(input, pos),
- input.getRowType().getFieldList().get(pos).getName()));
}
if (newChildExprs.isEmpty()) {
// At least a single item in project is required.
I think `count == 1` is wrong. What if there were two constant columns, `select 1, 2 ... union all select 1, 2 ...`.
for (int i = 0; i < union.getInputs().size(); i++) {
RelNode input = union.getInput(i);
List<Pair<RexNode, String>> newChildExprs = new ArrayList<>();
+ for (int j : refsIndex) {
newChildExprs.add(
+ Pair.<RexNode, String>of(rexBuilder.makeInputRef(input, j),
+ input.getRowType().getFieldList().get(j).getName()));
}
if (newChildExprs.isEmpty()) {
// At least a single item in project is required. |
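The fixed loop works because Calcite's `ImmutableBitSet` is iterable over its set bits. For a plain `java.util.BitSet` the equivalent idiom is `nextSetBit`, which avoids the O(n)-per-call `nth(j)` lookups; a sketch with hypothetical names:

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

public class BitSetDemo {
    // Instead of refsIndex.nth(j) inside an indexed loop, walk the set
    // bits directly; each step continues from the previous position.
    public static List<Integer> setBits(BitSet refsIndex) {
        List<Integer> positions = new ArrayList<>();
        for (int pos = refsIndex.nextSetBit(0); pos >= 0; pos = refsIndex.nextSetBit(pos + 1)) {
            positions.add(pos);
        }
        return positions;
    }
}
```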
codereview_java_data_11433 | package net.sourceforge.pmd.lang.metrics;
import net.sourceforge.pmd.lang.ast.QualifiableNode;
import net.sourceforge.pmd.lang.ast.SignedNode;
import net.sourceforge.pmd.lang.metrics.api.Metric.Version;
For these null checks (key + option) you could use the utility method java.util.Objects.requireNonNull, e.g. Objects.requireNonNull(key, "The metric key must not be null"); Objects.requireNonNull(option, "The result option must not be null");
package net.sourceforge.pmd.lang.metrics;
+import java.util.Objects;
+
import net.sourceforge.pmd.lang.ast.QualifiableNode;
import net.sourceforge.pmd.lang.ast.SignedNode;
import net.sourceforge.pmd.lang.metrics.api.Metric.Version; |
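The suggested `Objects.requireNonNull` pattern in isolation; since the method returns its argument, the check can even be folded into the assignment. Class and field names here are invented:

```java
import java.util.Objects;

public class MetricOptions {
    private final String key;
    private final String option;

    // requireNonNull returns its argument, so check and assignment
    // combine into one line; the message ends up in the NPE.
    public MetricOptions(String key, String option) {
        this.key = Objects.requireNonNull(key, "The metric key must not be null");
        this.option = Objects.requireNonNull(option, "The result option must not be null");
    }

    public String describe() {
        return key + ":" + option;
    }

    // Helper that captures the NPE message for demonstration purposes.
    public static String rejectionMessage() {
        try {
            new MetricOptions(null, "SUM");
            return "no exception";
        } catch (NullPointerException e) {
            return e.getMessage();
        }
    }
}
```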
codereview_java_data_11447 | delete -> openDeletes(delete, deleteSchema));
StructLikeSet deleteSet = Deletes.toEqualitySet(
CloseableIterable.transform(
- CloseableIterable.concat(deleteRecords, ThreadPools.getWorkerPool(), ThreadPools.getPoolParallelism()),
deleteRecord -> (StructLike) deleteRecord
),
deleteSchema.asStruct());
did this PR actually change or increase the parallelism? it seems to me we are still using the default parallelism of `2 * ThreadPools.WORKER_THREAD_POOL_SIZE`. Is the purpose of this PR mainly refactoring the API and supporting different parallelism (if needed in the future)?
delete -> openDeletes(delete, deleteSchema));
StructLikeSet deleteSet = Deletes.toEqualitySet(
CloseableIterable.transform(
+ CloseableIterable.combine(deleteRecords, readService, readParallelism),
deleteRecord -> (StructLike) deleteRecord
),
deleteSchema.asStruct()); |
codereview_java_data_11448 | .build();
}
- @SdkInternalApi
- public DefaultBatchManagerTestAsyncBatchManager(BatchManagerTestAsyncClient client,
- BatchManager<SendRequestRequest, SendRequestResponse, SendRequestBatchResponse> sendRequestBatchManager,
- BatchManager<DeleteRequestRequest, DeleteRequestResponse, DeleteRequestBatchResponse> deleteRequestBatchManager) {
this.sendRequestBatchManager = sendRequestBatchManager;
this.deleteRequestBatchManager = deleteRequestBatchManager;
this.client = client;
Can we make it package private?
.build();
}
+ DefaultBatchManagerTestAsyncBatchManager(BatchManagerTestAsyncClient client,
+ BatchManager<SendRequestRequest, SendRequestResponse, SendRequestBatchResponse> sendRequestBatchManager,
+ BatchManager<DeleteRequestRequest, DeleteRequestResponse, DeleteRequestBatchResponse> deleteRequestBatchManager) {
this.sendRequestBatchManager = sendRequestBatchManager;
this.deleteRequestBatchManager = deleteRequestBatchManager;
this.client = client; |
codereview_java_data_11450 | if (keyword != null) {
CheckBox cbVis = (CheckBox) convertView.findViewById(
R.id.checkbox_keyword_visibility);
- cbVis.setChecked(keyword.isVisible());
cbVis.setTag(position);
cbVis.setOnCheckedChangeListener(
new CompoundButton.OnCheckedChangeListener()
{
This method is way too long. It's a method of a class inside a class, and it contains multiple anonymous classes.
if (keyword != null) {
CheckBox cbVis = (CheckBox) convertView.findViewById(
R.id.checkbox_keyword_visibility);
cbVis.setTag(position);
+ cbVis.setChecked(keyword.isVisible());
cbVis.setOnCheckedChangeListener(
new CompoundButton.OnCheckedChangeListener()
{ |
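Why the fix moves `setChecked` above `setOnCheckedChangeListener`: a listener registered first would observe the initial binding as if it were a user change. A rough model of that ordering (a simplification for illustration, not Android's real `CompoundButton`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class CheckBoxSketch {
    private boolean checked;
    private Consumer<Boolean> listener;

    public void setChecked(boolean value) {
        // Like CompoundButton, fire the listener on state change.
        if (checked != value && listener != null) {
            listener.accept(value);
        }
        checked = value;
    }

    public void setOnCheckedChangeListener(Consumer<Boolean> l) {
        this.listener = l;
    }

    // Returns the change events the listener saw while binding initial state.
    public static List<Boolean> bind(boolean initialState, boolean listenerFirst) {
        List<Boolean> events = new ArrayList<>();
        CheckBoxSketch cb = new CheckBoxSketch();
        if (listenerFirst) {
            cb.setOnCheckedChangeListener(events::add);
            cb.setChecked(initialState); // fires a spurious event
        } else {
            cb.setChecked(initialState); // safe: no listener attached yet
            cb.setOnCheckedChangeListener(events::add);
        }
        return events;
    }
}
```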
codereview_java_data_11461 | dataType = DataTypeResolver.fromType(itemDefinition.getStructureRef(), cl);
}
variable.setType(dataType);
- if(defaultValue != null) {
- variable.setValue(dataType.verifyDataType(defaultValue) ? defaultValue : dataType.readValue((String)defaultValue));
- }
}
}
the null check is not needed
dataType = DataTypeResolver.fromType(itemDefinition.getStructureRef(), cl);
}
variable.setType(dataType);
+ variable.setValue(dataType.verifyDataType(defaultValue) ? defaultValue : dataType.readValue((String) defaultValue));
}
} |
codereview_java_data_11467 | // Make sure we are not null
if (analysisCache == null || isIgnoreIncrementalAnalysis() && isAnalysisCacheFunctional()) {
// sets a noop cache
- setAnalysisCache(null);
}
return analysisCache;
passing null here was a lousy hack to avoid double logging of the warning (as it was in the constructor). I'd take this chance to initialize the `analysisCache` right upon definition and skip this logic in the getter altogether.
// Make sure we are not null
if (analysisCache == null || isIgnoreIncrementalAnalysis() && isAnalysisCacheFunctional()) {
// sets a noop cache
+ setAnalysisCache(new NoopAnalysisCache());
}
return analysisCache; |
codereview_java_data_11472 | return !operands.get(0).getType().isNullable();
case IS_TRUE:
case IS_NOT_FALSE:
- case MAX:
- case MIN:
return operands.get(0).isAlwaysTrue();
case NOT:
return operands.get(0).isAlwaysFalse();
I think this is a little bit too much...it will work; but a single isAlwaysTrue() call will do a full visit on the subtree... As a public method I would think about it as something which takes O(1) time... I think these cases should be handled by RexSimplify
return !operands.get(0).getType().isNullable();
case IS_TRUE:
case IS_NOT_FALSE:
return operands.get(0).isAlwaysTrue();
case NOT:
return operands.get(0).isAlwaysFalse(); |
codereview_java_data_11474 | public List<String> getVideoBitrateOptions(DLNAResource dlna, DLNAMediaInfo media, OutputParams params) {
List<String> videoBitrateOptions = new ArrayList<>();
boolean low = false;
- String customFFmpegOptions = renderer.getCustomFFmpegOptions();
int defaultMaxBitrates[] = getVideoBitrateConfig(configuration.getMaximumBitrate());
int rendererMaxBitrates[] = new int[2];
@onon765trb `renderer` should be simply changed to `params.mediaRenderer`
public List<String> getVideoBitrateOptions(DLNAResource dlna, DLNAMediaInfo media, OutputParams params) {
List<String> videoBitrateOptions = new ArrayList<>();
boolean low = false;
+ String customFFmpegOptions = params.mediaRenderer.getCustomFFmpegOptions();
int defaultMaxBitrates[] = getVideoBitrateConfig(configuration.getMaximumBitrate());
int rendererMaxBitrates[] = new int[2]; |
codereview_java_data_11478 | new String[]{"Joey", "3"}
);
- assertRowsAnyOrder(
"SELECT name FROM " + tableName + " LIMIT 1",
- singletonList(new Row("Alice"))
);
}
The test works because it executes on the IMDG engine :)
new String[]{"Joey", "3"}
);
+ assertContainsOnlyOneOfRows(
"SELECT name FROM " + tableName + " LIMIT 1",
+ new Row("Alice"), new Row("Bob"), new Row("Joey")
);
} |
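A sketch of what an `assertContainsOnlyOneOfRows` helper has to verify for a `LIMIT 1` query without `ORDER BY`: exactly one row, drawn from the candidate set. Rows are simplified to strings here:

```java
import java.util.List;
import java.util.Set;

public class LimitAssertions {
    // A LIMIT 1 query without ORDER BY may return any row, so the test
    // must accept exactly one row that belongs to the candidate set.
    public static boolean containsOnlyOneOfRows(List<String> actual, Set<String> candidates) {
        return actual.size() == 1 && candidates.contains(actual.get(0));
    }
}
```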
codereview_java_data_11479 | * database snapshotting should not be repeated and streaming the binlog
* should resume at the position where it left off. If the state is
* reset, then the source will behave as if it were its initial start,
- * so will do a database snapshot and will start trailing the binlog
* where it syncs with the database snapshot's end.
*/
@Nonnull
This will break the guarantee. If ex-once, you MUST NOT commit the offsets before the phase 2. My proposal is to only use this period in non-snapshotted jobs. Negative value should be disallowed. Zero means committing after each batch. I assume that in databases which don't support committing of offsets the commit operation is no-op. If not, then we can use negative values to eke out a bit of performance.
* database snapshotting should not be repeated and streaming the binlog
* should resume at the position where it left off. If the state is
* reset, then the source will behave as if it were its initial start,
+ * so will do a database snapshot and will start tailing the binlog
* where it syncs with the database snapshot's end.
*/
@Nonnull |
codereview_java_data_11481 | * occurring.
*/
public int getPort() {
- checkState(hasPort(), "The given address does not include a port");
return port;
}
Just a minor wording suggestion: ```suggestion checkState(hasPort(), "the address does not include a port"); ``` The reason for this is that the exception could also occur when the user has not "given" any address. For example, if it's our own internal code. In that case, the word "given" might be confusing.
* occurring.
*/
public int getPort() {
+ checkState(hasPort(), "the address does not include a port");
return port;
} |
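The Guava `checkState(hasPort(), ...)` call boils down to a fail-fast precondition in an accessor. A dependency-free equivalent sketch (this class is a stand-in, not Guava's `HostAndPort`):

```java
public class HostAndPort {
    private final String host;
    private final int port; // -1 means "no port given"

    public HostAndPort(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public boolean hasPort() {
        return port >= 0;
    }

    // Fail fast with a clear message instead of returning a sentinel.
    public int getPort() {
        if (!hasPort()) {
            throw new IllegalStateException("the address does not include a port");
        }
        return port;
    }

    // Helper that captures the failure message for demonstration purposes.
    public static String portFailure(HostAndPort address) {
        try {
            address.getPort();
            return "no exception";
        } catch (IllegalStateException e) {
            return e.getMessage();
        }
    }
}
```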
codereview_java_data_11486 | if (file.format().isSplittable()) {
return () -> new SplitScanTaskIterator(splitSize, this);
} else {
- return Lists.newArrayList(this);
}
}
I think it would be better to use `ImmutableList.of(this)`. It is good to return immutable objects even though this probably won't be mutated. And that `ImmutableList` can use an implementation that is more efficient than `ArrayList` because it knows that there is only going to be one item.
if (file.format().isSplittable()) {
return () -> new SplitScanTaskIterator(splitSize, this);
} else {
+ return ImmutableList.of(this);
}
} |
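The same immutability argument holds for the JDK's `List.of` (Java 9+), which can stand in for Guava's `ImmutableList.of` in a self-contained sketch:

```java
import java.util.List;

public class ImmutableSingleton {
    // Compact, immutable single-element list; no ArrayList allocation.
    public static List<String> tasks(String task) {
        return List.of(task);
    }

    // Immutable lists reject mutation with UnsupportedOperationException.
    public static boolean rejectsMutation(List<String> list) {
        try {
            list.add("extra");
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }
}
```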
codereview_java_data_11490 | */
static <T> Iterator<T> tabulate(int n, Function<? super Integer, ? extends T> f) {
Objects.requireNonNull(f, "f is null");
- return Collections.tabulate(n, f, Iterator.empty(), Iterator::of);
}
/**
here we should use the lazy `Collections.tabulate(n, f)` method
*/
static <T> Iterator<T> tabulate(int n, Function<? super Integer, ? extends T> f) {
Objects.requireNonNull(f, "f is null");
+ return Collections.tabulate(n, f);
}
/** |
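What "lazy" buys here: an eager tabulate applies `f` to all `n` indices up front, while a lazy sequence applies it only to elements actually consumed. A JDK-stream sketch of the distinction (Vavr's own lazy `Iterator` behaves analogously, per the review):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyTabulate {
    // Counts how often the tabulating function runs when only the first
    // two of n elements are consumed; laziness means exactly two calls.
    public static int callsForFirstTwo(int n) {
        AtomicInteger calls = new AtomicInteger();
        Stream.iterate(0, i -> i + 1)
              .limit(n)
              .map(i -> { calls.incrementAndGet(); return i * i; })
              .limit(2)          // short-circuits the pipeline
              .forEach(x -> { });
        return calls.get();
    }
}
```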
codereview_java_data_11491 | TimeType timeType = TimeType.valueOf(ByteBufferUtil.toString(arguments.get(1)));
InitialTableState initialTableState =
InitialTableState.valueOf(ByteBufferUtil.toString(arguments.get(2)));
- log.info("Init Table State: " + ByteBufferUtil.toString(arguments.get(2)));
int splitCount = Integer.parseInt(ByteBufferUtil.toString(arguments.get(3)));
validateArgumentCount(arguments, tableOp, SPLIT_OFFSET + splitCount);
String splitFile = null;
Was this added for debugging?
TimeType timeType = TimeType.valueOf(ByteBufferUtil.toString(arguments.get(1)));
InitialTableState initialTableState =
InitialTableState.valueOf(ByteBufferUtil.toString(arguments.get(2)));
int splitCount = Integer.parseInt(ByteBufferUtil.toString(arguments.get(3)));
validateArgumentCount(arguments, tableOp, SPLIT_OFFSET + splitCount);
String splitFile = null; |
codereview_java_data_11492 | }
@Test
- public void nonceInsertedToTransactionIsThatProvidedFromNonceProvider() {
final KeyPair signingKeys = KeyPair.generate();
final Address precompiledAddress = Address.fromHexString("1");
This test seems like it's doing more than just checking that the inserted nonce is the provided nonce. Perhaps something like createsPrivateMarkerTransactionUsingProvidedNonce?
}
@Test
+ public void createsFullyPopulatedPrivateMarkerTransactionUsingProvidedNonce() {
final KeyPair signingKeys = KeyPair.generate();
final Address precompiledAddress = Address.fromHexString("1"); |
codereview_java_data_11495 | if (useSkipper) {
streamDeploymentProperties.put(SKIPPER_ENABLED_PROPERTY_KEY, "true");
}
- defaultStreamService.deployStream(name, streamDeploymentProperties);
}
return new Assembler(new PageImpl<>(Collections.singletonList(stream))).toResource(stream);
}
Looks like creating a definition without deploying it, and then deploying later, needs the `spring.cloud.dataflow.skipper.enabled` property. We prefix deployment properties with `app.*` and `deployer.*`, so this didn't work ``` stream deploy --name ticktock --properties "deployer.*.spring.cloud.dataflow.skipper.enabled=true" ``` and instead you need to use ``` stream deploy --name ticktock --properties "spring.cloud.dataflow.skipper.enabled=true" ``` We need to go through those prefixes as that's the contract.
if (useSkipper) {
streamDeploymentProperties.put(SKIPPER_ENABLED_PROPERTY_KEY, "true");
}
+ this.streamService.deployStream(name, streamDeploymentProperties);
}
return new Assembler(new PageImpl<>(Collections.singletonList(stream))).toResource(stream);
} |
codereview_java_data_11506 | int packetType = s.startsWith("M-SEARCH") ? M_SEARCH : s.startsWith("NOTIFY") ? NOTIFY : 0;
boolean redundant = address.equals(lastAddress) && packetType == lastPacketType;
- if (configuration.getIpFiltering().allowed(address) && getDevice(address) != null) {
String remoteAddr = address.getHostAddress();
int remotePort = receivePacket.getPort();
if (packetType == M_SEARCH || packetType == NOTIFY) {
@valib this change breaks the functionality - `getDevice` means it will only work for devices we know about, but this is useful for discovering _unknown_ devices
int packetType = s.startsWith("M-SEARCH") ? M_SEARCH : s.startsWith("NOTIFY") ? NOTIFY : 0;
boolean redundant = address.equals(lastAddress) && packetType == lastPacketType;
+ // Is the request from our own server, i.e. self-originating?
+ boolean isSelf = address.getHostAddress().equals(PMS.get().getServer().getHost()) && s.contains("UMS/");
+ int uuidPosition = s.indexOf(UUID);
+ UDN udn = UDN.valueOf(s.substring(uuidPosition, s.indexOf(":", uuidPosition + UUID.length())));
+ Device<?, ?, ?> device = getDevice(udn);
+
+ if (configuration.getIpFiltering().allowed(address) && !isSelf && !isIgnoredDevice((RemoteDevice) device)) {
String remoteAddr = address.getHostAddress();
int remotePort = receivePacket.getPort();
if (packetType == M_SEARCH || packetType == NOTIFY) { |
codereview_java_data_11508 | actData = actArray.toArray(actData);
actSeries = new FixedLineGraphSeries<>(actData);
actSeries.setDrawBackground(false);
- actSeries.setColor(MainApp.gc(R.color.mdtp_white));
actSeries.setThickness(3);
actScale.setMultiplier(scale / 0.04d); //TODO for clarity should be fixed scale, but what max? For now 0.04d seems reasonable.
define a new color identifier (can be white)
actData = actArray.toArray(actData);
actSeries = new FixedLineGraphSeries<>(actData);
actSeries.setDrawBackground(false);
+ actSeries.setColor(MainApp.gc(R.color.activity));
actSeries.setThickness(3);
actScale.setMultiplier(scale / 0.04d); //TODO for clarity should be fixed scale, but what max? For now 0.04d seems reasonable. |
codereview_java_data_11521 | import org.flowable.engine.impl.persistence.entity.DeploymentEntity;
import org.flowable.engine.impl.repository.AddAsNewDeploymentMergeStrategy;
import org.flowable.engine.impl.repository.AddAsOldDeploymentMergeStrategy;
-import org.flowable.engine.repository.DeploymentMergeStrategy;
import org.flowable.engine.impl.repository.MergeByDateDeploymentMergeStrategy;
-import org.flowable.engine.repository.MergeMode;
import org.flowable.engine.impl.repository.VerifyDeploymentMergeStrategy;
import org.flowable.engine.impl.util.CommandContextUtil;
import org.flowable.engine.impl.util.Flowable5Util;
import org.flowable.engine.repository.Deployment;
import org.flowable.engine.repository.ProcessDefinition;
/**
It appears that the imports are not in alphabetic order.
import org.flowable.engine.impl.persistence.entity.DeploymentEntity;
import org.flowable.engine.impl.repository.AddAsNewDeploymentMergeStrategy;
import org.flowable.engine.impl.repository.AddAsOldDeploymentMergeStrategy;
import org.flowable.engine.impl.repository.MergeByDateDeploymentMergeStrategy;
import org.flowable.engine.impl.repository.VerifyDeploymentMergeStrategy;
import org.flowable.engine.impl.util.CommandContextUtil;
import org.flowable.engine.impl.util.Flowable5Util;
import org.flowable.engine.repository.Deployment;
+import org.flowable.engine.repository.DeploymentMergeStrategy;
+import org.flowable.engine.repository.MergeMode;
import org.flowable.engine.repository.ProcessDefinition;
/** |
codereview_java_data_11523 | .collect(Collectors.toList());
assertThat(result.size()).isEqualTo(1);
assertThat(result.get(0).getValue()).contains(BytesValue.of(1));
- // TODO: fix path representation
- // assertThat(result.get(0).getPath()).isEqualTo(BytesValue.fromHexString("0x100000"));
}
@Test
Does this need looking at still?
.collect(Collectors.toList());
assertThat(result.size()).isEqualTo(1);
assertThat(result.get(0).getValue()).contains(BytesValue.of(1));
+ BytesValue actualPath = CompactEncoding.pathToBytes(result.get(0).getPath());
+ assertThat(actualPath).isEqualTo(BytesValue.fromHexString("0x100000"));
}
@Test |
codereview_java_data_11524 | private final BlockHeader header =
TestHelpers.createCliqueSignedBlockHeader(headerBuilder, proposerKeys, validatorList);
- private final EpochManager epochManager = new EpochManager(10);
-
private final BlockHeaderBuilder builder =
BlockHeaderBuilder.fromHeader(headerBuilder.buildHeader())
.blockHashFunction(MainnetBlockHashFunction::createHash);
should this use the EPOCH_BLOCK variable?
private final BlockHeader header =
TestHelpers.createCliqueSignedBlockHeader(headerBuilder, proposerKeys, validatorList);
private final BlockHeaderBuilder builder =
BlockHeaderBuilder.fromHeader(headerBuilder.buildHeader())
.blockHashFunction(MainnetBlockHashFunction::createHash); |
codereview_java_data_11526 | */
package tech.pegasys.pantheon.consensus.common;
import static tech.pegasys.pantheon.consensus.common.VoteType.ADD;
import tech.pegasys.pantheon.ethereum.core.Address;
import java.util.Objects;
-import com.google.common.base.Preconditions;
-
public class ValidatorVote {
private final VoteType votePolarity;
nit: Can we static import `checkNotNull` please? Just to keep things succinct.
*/
package tech.pegasys.pantheon.consensus.common;
+import static com.google.common.base.Preconditions.checkNotNull;
import static tech.pegasys.pantheon.consensus.common.VoteType.ADD;
import tech.pegasys.pantheon.ethereum.core.Address;
import java.util.Objects;
public class ValidatorVote {
private final VoteType votePolarity; |
codereview_java_data_11554 | * @param nodeName the node on which to do the query
*/
private void addQueryToNode(final XPath xPath, final String nodeName) {
- List<XPath> xPaths = nodeNameToXPaths.get(nodeName);
- if (xPaths == null) {
- xPaths = new ArrayList<>();
- nodeNameToXPaths.put(nodeName, xPaths);
}
- xPaths.add(xPath);
}
private BaseXPath createXPath(final String xpathQueryString, final Navigator navigator) throws JaxenException {
`xPaths` and `xPath` are so similar they're confusing, maybe rename the list to `xpathsForNode`
* @param nodeName the node on which to do the query
*/
private void addQueryToNode(final XPath xPath, final String nodeName) {
+ List<XPath> xPathsForNode = nodeNameToXPaths.get(nodeName);
+ if (xPathsForNode == null) {
+ xPathsForNode = new ArrayList<>();
+ nodeNameToXPaths.put(nodeName, xPathsForNode);
}
+ xPathsForNode.add(xPath);
}
private BaseXPath createXPath(final String xpathQueryString, final Navigator navigator) throws JaxenException { |
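Beyond renaming, the whole get/null-check/put dance (and the confusingly similar `xPaths`/`xPath` names) can be collapsed with `Map.computeIfAbsent` since Java 8. A sketch using strings in place of Jaxen `XPath` objects:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiMapDemo {
    private final Map<String, List<String>> nodeNameToXPaths = new HashMap<>();

    // computeIfAbsent creates the list only on the first query for a node,
    // replacing the explicit get/null-check/put sequence.
    public void addQueryToNode(String xPath, String nodeName) {
        nodeNameToXPaths.computeIfAbsent(nodeName, k -> new ArrayList<>()).add(xPath);
    }

    public List<String> queriesFor(String nodeName) {
        return nodeNameToXPaths.getOrDefault(nodeName, List.of());
    }
}
```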
codereview_java_data_11558 | false,
(res) -> {
if (res.failed()) {
- LOG.debug("Request for metrics failed", res.cause());
response.setStatusCode(HttpResponseStatus.INTERNAL_SERVER_ERROR.code()).end();
} else if (response.closed()) {
LOG.trace("Request for metrics closed before response was generated");
Should this be a `warn` or `error`?
false,
(res) -> {
if (res.failed()) {
+ LOG.error("Request for metrics failed", res.cause());
response.setStatusCode(HttpResponseStatus.INTERNAL_SERVER_ERROR.code()).end();
} else if (response.closed()) {
LOG.trace("Request for metrics closed before response was generated"); |
codereview_java_data_11560 | */
static <T> Iterator<T> fill(int n, Supplier<? extends T> s) {
Objects.requireNonNull(s, "s is null");
- return Collections.fill(n, s, Iterator.empty(), Iterator::of);
}
/**
here we should also use the lazy `Collections.fill(n, s)` method
*/
static <T> Iterator<T> fill(int n, Supplier<? extends T> s) {
Objects.requireNonNull(s, "s is null");
+ return Collections.fill(n, s);
}
/** |
codereview_java_data_11563 | protected Address[] mCc;
protected Address[] mBcc;
protected Address[] mReplyTo;
- protected Address[] mX_OriginalTo;
- protected Address[] mDeliveredTo;
- protected Address[] mX_EnvelopeTo;
protected String mMessageId;
private String[] mReferences;
We're moving away from using the `m` prefix for fields. Please don't use it for new fields. Also, please don't use underscores in field names. This should be ```java protected Address[] xOriginalTo; protected Address[] deliveredTo; protected Address[] xEnvelopeTo; ```
protected Address[] mCc;
protected Address[] mBcc;
protected Address[] mReplyTo;
+ protected Address[] xOriginalTo;
+ protected Address[] deliveredTo;
+ protected Address[] xEnvelopeTo;
protected String mMessageId;
private String[] mReferences; |
codereview_java_data_11565 | private static boolean allowAllClasses;
private static HashSet<String> allowedClassNames;
- private static Profiler profiler;
/**
* In order to manage more than one class loader
This change is wrong. Each web session must use its own profiler, as it did before your changes. You need to find some other way to deal with it.
private static boolean allowAllClasses;
private static HashSet<String> allowedClassNames;
/**
* In order to manage more than one class loader |
codereview_java_data_11566 | translate(kList.get(i)).expression().toString());
}
if (hasBinder) {
- binders.pop();
}
return new SMTLibTerm(expression);
}
This whole feature is extraordinarily error-prone. I'd say a warning must be added here when it is used.
translate(kList.get(i)).expression().toString());
}
if (hasBinder) {
+ smtlibForallOrExistsBinders.pop();
}
return new SMTLibTerm(expression);
} |
codereview_java_data_11569 | Pair<Integer,String> pair1 = new Pair<>(25, "twenty-five");
Pair<Integer,String> pair2 = new Pair<>(25, "twenty-five");
Pair<Integer,String> pair3 = new Pair<>(null, null);
assertNotSame(pair1, pair2);
assertEquals(pair1.hashCode(), pair2.hashCode());
assertNotSame(pair2, pair3);
- assertNotEquals(pair2.hashCode(), pair3.hashCode());
}
/**
Could add a check for the case where "first" is equal but "second" is different, and vice-versa.
Pair<Integer,String> pair1 = new Pair<>(25, "twenty-five");
Pair<Integer,String> pair2 = new Pair<>(25, "twenty-five");
Pair<Integer,String> pair3 = new Pair<>(null, null);
+ Pair<Integer,String> pair4 = new Pair<>(25, "twentyfive");
+ Pair<Integer,String> pair5 = new Pair<>(225, "twenty-five");
assertNotSame(pair1, pair2);
assertEquals(pair1.hashCode(), pair2.hashCode());
assertNotSame(pair2, pair3);
+ assertNotEquals(pair1.hashCode(), pair4.hashCode());
+ assertNotEquals(pair1.hashCode(), pair5.hashCode());
}
/** |
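The extra cases the reviewer asks for are easiest to reason about with the `equals`/`hashCode` pair spelled out. A minimal `Pair` sketch (not the project's actual class):

```java
import java.util.Objects;

public class Pair<A, B> {
    private final A first;
    private final B second;

    public Pair(A first, B second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Pair)) return false;
        Pair<?, ?> other = (Pair<?, ?>) o;
        // Objects.equals handles null components safely.
        return Objects.equals(first, other.first) && Objects.equals(second, other.second);
    }

    @Override
    public int hashCode() {
        // Mixes both components (null-safe), so equal pairs hash equally
        // and a difference in either component changes the hash.
        return Objects.hash(first, second);
    }
}
```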
codereview_java_data_11573 | import com.github.javaparser.symbolsolver.resolution.typesolvers.ReflectionTypeSolver;
import org.junit.Test;
-import java.io.File;
-import java.io.FileInputStream;
import java.io.IOException;
import static org.junit.Assert.assertEquals;
Try slowly refactoring things from File to Path.
import com.github.javaparser.symbolsolver.resolution.typesolvers.ReflectionTypeSolver;
import org.junit.Test;
import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
import static org.junit.Assert.assertEquals; |
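A sketch of the `File`/`FileInputStream` to `Path` migration the reviewer suggests, using the NIO convenience methods (Java 11+ for `readString`/`writeString`; `IOException` is wrapped so callers stay simple):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PathMigration {
    // Replaces new FileInputStream(new File(name)) plus manual reading
    // with a single NIO call (UTF-8 by default).
    public static String readSource(Path file) {
        try {
            return Files.readString(file);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Creates a throwaway file for the demonstration.
    public static Path writeTemp(String content) {
        try {
            Path tmp = Files.createTempFile("source", ".java");
            tmp.toFile().deleteOnExit();
            return Files.writeString(tmp, content);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```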
codereview_java_data_11578 | import de.danoeh.antennapod.core.storage.DBReader;
import org.junit.Before;
-import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.robolectric.RobolectricTestRunner;
Do you think this could somehow interfere with the other tests, causing the NPEs?
import de.danoeh.antennapod.core.storage.DBReader;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
+import org.mockito.MockedStatic;
import org.mockito.Mockito;
import org.robolectric.RobolectricTestRunner; |
codereview_java_data_11585 | this.serialized = serialized;
}
- MaterialColor(int lightThemeLightColor, int lightThemeDarkColor,
- int darkThemeLightColor, int darkThemeDarkColor, String serialized)
{
- this(lightThemeLightColor, lightThemeLightColor, lightThemeDarkColor,
- darkThemeLightColor, darkThemeLightColor, darkThemeDarkColor, serialized);
}
public int toConversationColor(@NonNull Context context) {
lightColor, darkColor, lightStatusBarColor, darkStatusBarColor
this.serialized = serialized;
}
+ MaterialColor(int lightColor, int darkColor,
+ int lightStatusBarColor, int darkStatusBarColor, String serialized)
{
+ this(lightColor, lightColor, lightStatusBarColor,
+ darkColor, darkColor, darkStatusBarColor, serialized);
}
public int toConversationColor(@NonNull Context context) { |
codereview_java_data_11590 | this(pollDelay, pollTimeUnit, Optional.<Lock> absent());
}
- protected SingularityLeaderOnlyPoller(long pollDelay, TimeUnit pollTimeUnit, Optional<Lock> lockHolder) {
this.pollDelay = pollDelay;
this.pollTimeUnit = pollTimeUnit;
this.lockHolder = lockHolder;
That constructor will only ever be called if a lock is present. Make it take the lock directly and have it wrap it in an Optional here. Also, add a checkNotNull(). :-) This keeps people from cutting and pasting your code and then replacing the super with `super (... , Optional.<Lock>absent())`
this(pollDelay, pollTimeUnit, Optional.<Lock> absent());
}
+ protected SingularityLeaderOnlyPoller(long pollDelay, TimeUnit pollTimeUnit, Lock lock) {
+ this(pollDelay, pollTimeUnit, Optional.of(lock));
+ }
+
+ private SingularityLeaderOnlyPoller(long pollDelay, TimeUnit pollTimeUnit, Optional<Lock> lockHolder) {
this.pollDelay = pollDelay;
this.pollTimeUnit = pollTimeUnit;
this.lockHolder = lockHolder; |
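The suggested constructor layout in isolation: subclasses hand over a concrete `Lock`, only the private constructor deals in `Optional`, and `Optional.of` already supplies the requested null check. A simplified sketch, not the Singularity class itself:

```java
import java.util.Optional;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;

public class Poller {
    private final long pollDelay;
    private final TimeUnit pollTimeUnit;
    private final Optional<Lock> lockHolder;

    // Subclasses pass the lock directly; Optional.of throws NPE on null,
    // which doubles as the checkNotNull() the review asks for.
    protected Poller(long pollDelay, TimeUnit pollTimeUnit, Lock lock) {
        this(pollDelay, pollTimeUnit, Optional.of(lock));
    }

    protected Poller(long pollDelay, TimeUnit pollTimeUnit) {
        this(pollDelay, pollTimeUnit, Optional.empty());
    }

    // Only this private constructor ever sees the Optional, so no
    // subclass can sneak in Optional.empty() by accident.
    private Poller(long pollDelay, TimeUnit pollTimeUnit, Optional<Lock> lockHolder) {
        this.pollDelay = pollDelay;
        this.pollTimeUnit = pollTimeUnit;
        this.lockHolder = lockHolder;
    }

    public boolean hasLock() {
        return lockHolder.isPresent();
    }
}
```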
codereview_java_data_11591 | private final ObservableXPathRuleBuilder ruleBuilder = new ObservableXPathRuleBuilder();
- private static final Duration XPATH_REFRESH = Duration.ofMillis(3000);
@FXML
private PropertyTableView propertyView;
This delay is too long. Like for the editor, 100 millis would be nicer
private final ObservableXPathRuleBuilder ruleBuilder = new ObservableXPathRuleBuilder();
+ private static final Duration XPATH_REFRESH = Duration.ofMillis(300);
@FXML
private PropertyTableView propertyView; |
codereview_java_data_11598 | testSource();
testDynamicArgumentAndReturn();
testUUID();
- testBase64();
testWhiteSpacesInParameters();
testSchemaSearchPath();
testDeterministic();
We prefer pure SQL test cases, see `TestScript` and `h2/src/test/org/h2/test/scripts/functions/string`.
testSource();
testDynamicArgumentAndReturn();
testUUID();
testWhiteSpacesInParameters();
testSchemaSearchPath();
testDeterministic(); |
codereview_java_data_11600 | long length();
/**
- * @return a list of offsets for file blocks, if applicable, null otherwise. When available, this
- * information is used for planning scan tasks whose boundaries are determined by these offsets.
- * It is important that the returned list is sorted in ascending order.
* Only valid after the file is closed.
*/
default List<Long> splitOffsets() {
I missed this earlier, but why does this say "file blocks"? This should probably be "recommended split locations".
long length();
/**
+ * @return a list of recommended split locations, if applicable, null otherwise. When available,
+ * this information is used for planning scan tasks whose boundaries are determined by these offsets.
+ * The returned list must be sorted in ascending order.
* Only valid after the file is closed.
*/
default List<Long> splitOffsets() { |
codereview_java_data_11624 | });
mCryptoSupportSignOnly.setChecked(mAccount.getCryptoSupportSignOnly());
- mCryptoSupportSignOnly.setOnPreferenceChangeListener(new Preference.OnPreferenceChangeListener() {
- public boolean onPreferenceChange(Preference preference, Object newValue) {
- boolean value = (Boolean) newValue;
- mCryptoSupportSignOnly.setChecked(value);
- return false;
- }
- });
} else {
final Preference mCryptoMenu = findPreference(PREFERENCE_CRYPTO);
mCryptoMenu.setEnabled(false);
Looks like the default behavior will do and there's no need for a custom `OnPreferenceChangeListener`.
});
mCryptoSupportSignOnly.setChecked(mAccount.getCryptoSupportSignOnly());
} else {
final Preference mCryptoMenu = findPreference(PREFERENCE_CRYPTO);
mCryptoMenu.setEnabled(false); |
codereview_java_data_11626 | @Override
public SnapshotTable.Result execute() {
JobGroupInfo info = newJobGroupInfo("SNAPSHOT-TABLE",
- String.format("Snapshotting table %s(location=%s)", sourceTableIdent().toString(), destTableLocation));
return withJobGroupInfo(info, this::doExecute);
}
I think it is important to show the dest table identifier. Also, the location can be null if not provided. Since we are not showing table props and the location can be long, I'll be fine just saying this: ``` Snapshotting table %s as %s ```
@Override
public SnapshotTable.Result execute() {
JobGroupInfo info = newJobGroupInfo("SNAPSHOT-TABLE",
+ String.format("Snapshotting table %s as %s", sourceTableIdent().toString(), destTableIdent.toString()));
return withJobGroupInfo(info, this::doExecute);
} |
codereview_java_data_11628 | public TabletMigration(TabletId tabletId, TabletServerId oldTabletServer,
TabletServerId newTabletServer) {
- requireNonNull(tabletId);
- requireNonNull(oldTabletServer);
- requireNonNull(newTabletServer);
-
- this.tabletId = tabletId;
- this.oldTabletServer = oldTabletServer;
- this.newTabletServer = newTabletServer;
}
public TabletId getTablet() {
```suggestion this.tabletId = requireNonNull(tabletId); this.oldTabletServer = requireNonNull(oldTabletServer); this.newTabletServer = requireNonNull(newTabletServer); ```
public TabletMigration(TabletId tabletId, TabletServerId oldTabletServer,
TabletServerId newTabletServer) {
+ this.tabletId = requireNonNull(tabletId);
+ this.oldTabletServer = requireNonNull(oldTabletServer);
+ this.newTabletServer = requireNonNull(newTabletServer);
}
public TabletId getTablet() { |
codereview_java_data_11635 | requestHeader.setHaServerAddr(haServerAddr);
requestHeader.setCompressed(true);
RemotingCommand request = RemotingCommand.createRequestCommand(RequestCode.REGISTER_BROKER, requestHeader);
-
RegisterBrokerBody requestBody = new RegisterBrokerBody();
requestBody.setTopicConfigSerializeWrapper(topicConfigWrapper);
requestBody.setFilterServerList(filterServerList);
Great idea! How about setting a threshold for triggering the compression? If the data you compress is small, the overhead is higher than the benefit you get from the compression.
requestHeader.setHaServerAddr(haServerAddr);
requestHeader.setCompressed(true);
RemotingCommand request = RemotingCommand.createRequestCommand(RequestCode.REGISTER_BROKER, requestHeader);
+ if (request.getVersion() <= 0) {
+ request.setVersion(MQVersion.CURRENT_VERSION);
+ }
RegisterBrokerBody requestBody = new RegisterBrokerBody();
requestBody.setTopicConfigSerializeWrapper(topicConfigWrapper);
requestBody.setFilterServerList(filterServerList); |
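A minimal sketch of the threshold idea the reviewer raises — compress only when the payload exceeds a cutoff, since deflate overhead can outweigh the savings on small bodies. The threshold value and helper name are illustrative, not from RocketMQ:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

class CompressUtil {
    // Illustrative cutoff: bodies at or below this size are sent uncompressed.
    static final int COMPRESS_THRESHOLD = 4 * 1024;

    static byte[] maybeCompress(byte[] body) {
        if (body.length <= COMPRESS_THRESHOLD) {
            return body; // too small: compression overhead outweighs the benefit
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (DeflaterOutputStream dos =
                new DeflaterOutputStream(out, new Deflater(Deflater.BEST_SPEED))) {
            dos.write(body);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return out.toByteArray();
    }
}
```

The receiving side would then need a flag (such as the `setCompressed` header above) to know whether to inflate the body.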
codereview_java_data_11639 | Range seekRange = range;
if (range.getEndKey() != null) {
- Key seekKey = new Key(seekRange.getEndKey());
if (range.getEndKey().getTimestamp() != Long.MIN_VALUE) {
seekKey = new Key(seekRange.getEndKey());
seekKey.setTimestamp(Long.MIN_VALUE);
This always creates a new Key, but it's not always used. Also, in the following if statement block it recreates the key and does not use the one created here.
Range seekRange = range;
if (range.getEndKey() != null) {
+ Key seekKey = seekRange.getEndKey();
if (range.getEndKey().getTimestamp() != Long.MIN_VALUE) {
seekKey = new Key(seekRange.getEndKey());
seekKey.setTimestamp(Long.MIN_VALUE); |
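The fix follows a general copy-on-change pattern: start with a reference to the existing object and allocate a modified copy only when a change is actually needed. A standalone sketch with illustrative names:

```java
import java.util.ArrayList;
import java.util.List;

class CopyOnChange {
    // Lower-cases the entries of a list, allocating a copy only if at
    // least one entry actually changes; otherwise the input is returned
    // unchanged, avoiding any allocation on the common path.
    static List<String> withLowerCase(List<String> in) {
        List<String> result = in;
        for (int i = 0; i < in.size(); i++) {
            String s = in.get(i);
            String lower = s.toLowerCase();
            if (!lower.equals(s)) {
                if (result == in) {
                    result = new ArrayList<>(in); // copy on first change only
                }
                result.set(i, lower);
            }
        }
        return result;
    }
}
```

In hot paths such as seek handling, skipping the unconditional allocation avoids garbage on every call where no modification was required.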
codereview_java_data_11640 | ZooReader zr = context.getZooReaderWriter();
String zPath = propPathPrefix + "/" + prop.getKey();
try {
- if (zr.exists(zPath) && zr.getData(zPath, null) != null) {
return true;
}
} catch (KeeperException|InterruptedException e) {
I am not sure both calls to `exists()` and `getData()` are needed. We may be able to just call `exists()`.
ZooReader zr = context.getZooReaderWriter();
String zPath = propPathPrefix + "/" + prop.getKey();
try {
+ if (zr.exists(zPath)) {
return true;
}
} catch (KeeperException|InterruptedException e) { |
codereview_java_data_11641 | package fr.free.nrw.commons;
-import fr.free.nrw.commons.contributions.model.DisplayableContribution;
public interface ViewHolder<T> {
- void init(int position,
- DisplayableContribution contribution);
}
I'm a bit confused by this. `ViewHolder` is meant to be a generic holder object for a model (the generic `T`) and the Android `context`. It doesn't seem appropriate to convert this to such a specific method. Perhaps this happened with some automated refactoring? I would strongly urge you to change it back and to introduce a new interface if necessary. If you really want to go down this route, then you need to at least change `ViewHolder` to not have a generic parameter and rename the class to something more specific, like `DisplayableContributionHolder`. You may also want to move it to `commons/contributions` since that's where it seems to be used.
package fr.free.nrw.commons;
+import android.content.Context;
public interface ViewHolder<T> {
+ void bindModel(Context context, T model);
} |
codereview_java_data_11652 | private static final long serialVersionUID = -1973296520918624767L;
private static final int MAX_BODY_SIZE_FOR_DATABASE = 16 * 1024;
static final long INVALID_MESSAGE_PART_ID = -1;
- private static final int INVALID_UID_VALIDITY = -1;
- private static final int INVALID_HIGHEST_MOD_SEQ = -1;
private final LocalStore localStore;
private final AttachmentInfoExtractor attachmentInfoExtractor;
Redefining the same constants as in the library. Maybe they should be visible if you need them here?
private static final long serialVersionUID = -1973296520918624767L;
private static final int MAX_BODY_SIZE_FOR_DATABASE = 16 * 1024;
static final long INVALID_MESSAGE_PART_ID = -1;
private final LocalStore localStore;
private final AttachmentInfoExtractor attachmentInfoExtractor; |
codereview_java_data_11670 | private List<Group> groups;
private List<Policy> policies;
private List<ServiceIdentity> services;
- private Map<String, StringList> tags;
private Domain domain = null;
public AthenzDomain(String name) {
why did we add tags as a field here? The domain object below already has the tags as a field, since they're part of the metadata, so I would expect that the domain object contains the tags when the AthenzDomain object was created; thus there is no need for this extra field here.
private List<Group> groups;
private List<Policy> policies;
private List<ServiceIdentity> services;
private Domain domain = null;
public AthenzDomain(String name) { |
codereview_java_data_11673 | config = createMap(accumuloPropsLocation, overrides);
}
- private static Map<String,String> createMap(URL accumuloPropsLocation,
Map<String,String> overrides) {
CompositeConfiguration config = new CompositeConfiguration();
config.setThrowExceptionOnMissing(false);
Could use the guava immutable map builder
config = createMap(accumuloPropsLocation, overrides);
}
+ private static ImmutableMap<String,String> createMap(URL accumuloPropsLocation,
Map<String,String> overrides) {
CompositeConfiguration config = new CompositeConfiguration();
config.setThrowExceptionOnMissing(false); |
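The reviewer suggests Guava's `ImmutableMap.builder()`; the closest JDK (9+) stand-in is `Map.copyOf`, shown here in a standalone sketch of a factory returning an immutable snapshot (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

class ConfigFactory {
    // Merges defaults with overrides and returns an immutable view,
    // so callers cannot mutate the configuration after creation —
    // the same guarantee Guava's ImmutableMap builder provides.
    static Map<String, String> createMap(Map<String, String> defaults,
                                         Map<String, String> overrides) {
        Map<String, String> merged = new HashMap<>(defaults);
        merged.putAll(overrides);
        return Map.copyOf(merged);
    }
}
```

Returning an immutable type from the factory documents the intent in the signature and fails fast on accidental mutation.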
codereview_java_data_11686 | import org.apache.iceberg.types.Types;
public interface Unbound<T, B> {
/**
* Bind this value expression to concrete types.
Does it have to extend `ValueExpression`?
import org.apache.iceberg.types.Types;
+/**
+ * Represents an unbound expression node.
+ * @param <T> the Java type of values produced by this node
+ * @param <B> the Java type produced when this node is bound using {@link #bind(Types.StructType, boolean)}
+ */
public interface Unbound<T, B> {
/**
* Bind this value expression to concrete types. |
codereview_java_data_11694 | System.out.println(indent + "Up to size Count %-age");
for (int i = 1; i < countBuckets.length; i++) {
System.out.println(String.format("%s%11s : %10d %6.2f%%", indent,
- bigNumberForSize(Double.valueOf(Math.pow(10, i)).longValue()), countBuckets[i],
sizeBuckets[i] * 100. / totalSize));
}
}
```suggestion NumUtil.bigNumberForSize(Double.valueOf(Math.pow(10, i)).longValue()), countBuckets[i], ```
System.out.println(indent + "Up to size Count %-age");
for (int i = 1; i < countBuckets.length; i++) {
System.out.println(String.format("%s%11s : %10d %6.2f%%", indent,
+ NumUtil.bigNumberForSize(Double.valueOf(Math.pow(10, i)).longValue()), countBuckets[i],
sizeBuckets[i] * 100. / totalSize));
}
} |
codereview_java_data_11695 | scanState.scanID = null;
if (scanState.isolated) {
- child2.recordException(e, Attributes.builder().put("exception.message", e.getMessage())
- .put("exception.escaped", true).build());
throw new IsolationException();
}
- child2.recordException(e, Attributes.builder().put("exception.message", e.getMessage())
- .put("exception.escaped", false).build());
sleepMillis = pause(sleepMillis, maxSleepTime);
} finally {
child2.end();
Do we need to explicitly add the `exception.message` every time we call `recordException`? It seems like it should do that much on its own at least.
scanState.scanID = null;
if (scanState.isolated) {
+ TraceUtil.setException(child2, e, true);
throw new IsolationException();
}
+ TraceUtil.setException(child2, e, false);
sleepMillis = pause(sleepMillis, maxSleepTime);
} finally {
child2.end(); |
codereview_java_data_11696 | for (String file : opts.files) {
AccumuloConfiguration aconf = DefaultConfiguration.getInstance();
- CryptoService cryptoService = CryptoServiceFactory.newInstance(aconf);
Path path = new Path(file);
CachableBlockFile.Reader rdr = new CachableBlockFile.Reader(fs, path, conf, null, null, aconf,
cryptoService);
aconf is default configuration
for (String file : opts.files) {
AccumuloConfiguration aconf = DefaultConfiguration.getInstance();
+ CryptoService cryptoService = ConfigurationTypeHelper.getClassInstance(null, opts.cryptoClass,
+ CryptoService.class, new NoCryptoService());
Path path = new Path(file);
CachableBlockFile.Reader rdr = new CachableBlockFile.Reader(fs, path, conf, null, null, aconf,
cryptoService); |
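The fixed code instantiates the crypto service by class name with a fallback default. A self-contained sketch of that "instantiate by name, or fall back" pattern — Accumulo's `ConfigurationTypeHelper.getClassInstance` works along these lines, but the helper here is illustrative:

```java
class ClassLoaderUtil {
    // Instantiates className reflectively and casts it to the given base
    // type; returns the fallback when the name is absent or the class
    // cannot be loaded/constructed.
    static <T> T instanceOrDefault(String className, Class<T> base, T fallback) {
        if (className == null || className.isEmpty()) {
            return fallback;
        }
        try {
            return base.cast(
                Class.forName(className).getDeclaredConstructor().newInstance());
        } catch (ReflectiveOperationException e) {
            return fallback;
        }
    }
}
```

Catching `ReflectiveOperationException` covers the whole family of loading and construction failures in one clause.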
codereview_java_data_11701 | throw new UnsupportedOperationException("Out of scope for antlr current implementations");
}
@Override
default boolean hasImageEqualTo(final String image) {
throw new UnsupportedOperationException("Out of scope for antlr current implementations");
You should return null here instead. Null is an acceptable default value for the image attribute.
throw new UnsupportedOperationException("Out of scope for antlr current implementations");
}
+ @Override
+ default String getImage() {
+ throw new UnsupportedOperationException("Out of scope for antlr current implementations");
+ }
+
@Override
default boolean hasImageEqualTo(final String image) {
throw new UnsupportedOperationException("Out of scope for antlr current implementations"); |
codereview_java_data_11708 | * @return Returns a reference to this object so that method calls can be chained together.
*/
default Builder destination(File destination) {
return destination(destination.toPath());
}
Null Check/Validations missing
* @return Returns a reference to this object so that method calls can be chained together.
*/
default Builder destination(File destination) {
+ Validate.paramNotNull(destination, "destination");
return destination(destination.toPath());
} |
codereview_java_data_11724 | processFetchResponses(remoteMessagesToDownload, qresyncParamResponse, flagSyncHelper, syncHelper);
int newLocalMessageCount = remoteMessagesToDownload.size() + localFolder.getMessageCount();
if (imapFolder.getMessageCount() >= localFolder.getVisibleLimit() && imapFolder.getMessageCount() >=
newLocalMessageCount) {
- findOldRemoteMessagesToDownload(remoteMessagesToDownload, syncHelper);
}
int messageDownloadCount = remoteMessagesToDownload.size();
Looks like I made a mistake here. The SELECT response for QRESYNC gives the UID + flags for any new messages that arrived since the last sync. However, in this line, we fetch only the UIDs (via UID SEARCH) of old messages that need to be downloaded to fill the mailbox. This means that the flags for old messages are never downloaded. I think the best way to fix this would be to use a FETCH command here instead of a UID SEARCH to fetch the UIDs and flags of old messages using their sequence numbers. This will mean adding support for FETCH (we only support UID FETCH currently). Is there a better way? @Valodim
processFetchResponses(remoteMessagesToDownload, qresyncParamResponse, flagSyncHelper, syncHelper);
+ boolean flaglessMessagesPresent = false;
int newLocalMessageCount = remoteMessagesToDownload.size() + localFolder.getMessageCount();
if (imapFolder.getMessageCount() >= localFolder.getVisibleLimit() && imapFolder.getMessageCount() >=
newLocalMessageCount) {
+ flaglessMessagesPresent = findOldRemoteMessagesToDownload(remoteMessagesToDownload, syncHelper);
}
int messageDownloadCount = remoteMessagesToDownload.size(); |