Dataset columns:
- problem: string (lengths 26 to 131k)
- labels: class label (2 classes)
clang-format for boost program options : <p><code>clang-format</code> seems to make a big mess out of blocks like this:</p> <pre><code>desc.add_options()("help", "output usage") ("inputDirectory", po::value&lt;boost::filesystem::path&gt;()-&gt;required(), "The input path") ("outputDirectory", po::value&lt;boost::filesystem::path&gt;()-&gt;required(), "The output path"); </code></pre> <p>I know about <code>// clang-format off</code> to explicitly not format a block, but is there a set of configuration rules to make it do something reasonable with this?</p>
0debug
HELPER_LD_ATOMIC(ll, lw) #ifdef TARGET_MIPS64 HELPER_LD_ATOMIC(lld, ld) #endif #undef HELPER_LD_ATOMIC #define HELPER_ST_ATOMIC(name, ld_insn, st_insn, almask) \ target_ulong helper_##name(CPUMIPSState *env, target_ulong arg1, \ target_ulong arg2, int mem_idx) \ { \ target_long tmp; \ \ if (arg2 & almask) { \ env->CP0_BadVAddr = arg2; \ helper_raise_exception(env, EXCP_AdES); \ } \ if (do_translate_address(env, arg2, 1) == env->lladdr) { \ tmp = do_##ld_insn(env, arg2, mem_idx); \ if (tmp == env->llval) { \ do_##st_insn(env, arg2, arg1, mem_idx); \ return 1; \ } \ } \ return 0; \ } HELPER_ST_ATOMIC(sc, lw, sw, 0x3) #ifdef TARGET_MIPS64 HELPER_ST_ATOMIC(scd, ld, sd, 0x7) #endif #undef HELPER_ST_ATOMIC #endif #ifdef TARGET_WORDS_BIGENDIAN #define GET_LMASK(v) ((v) & 3) #define GET_OFFSET(addr, offset) (addr + (offset)) #else #define GET_LMASK(v) (((v) & 3) ^ 3) #define GET_OFFSET(addr, offset) (addr - (offset)) #endif target_ulong helper_lwl(CPUMIPSState *env, target_ulong arg1, target_ulong arg2, int mem_idx) { target_ulong tmp; tmp = do_lbu(env, arg2, mem_idx); arg1 = (arg1 & 0x00FFFFFF) | (tmp << 24); if (GET_LMASK(arg2) <= 2) { tmp = do_lbu(env, GET_OFFSET(arg2, 1), mem_idx); arg1 = (arg1 & 0xFF00FFFF) | (tmp << 16); } if (GET_LMASK(arg2) <= 1) { tmp = do_lbu(env, GET_OFFSET(arg2, 2), mem_idx); arg1 = (arg1 & 0xFFFF00FF) | (tmp << 8); } if (GET_LMASK(arg2) == 0) { tmp = do_lbu(env, GET_OFFSET(arg2, 3), mem_idx); arg1 = (arg1 & 0xFFFFFF00) | tmp; } return (int32_t)arg1; }
1threat
What's the difference between "memory" and "memory footprint" fields on Chrome's task manager? : <p>I'm using Chrome 64 and noticed that there's two fields called "memory" on Chrome's task manager. See the picture below:</p> <p><a href="https://i.stack.imgur.com/DmR2G.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DmR2G.png" alt="memory and memory footprint"></a></p> <p>I can't find any explanation of the difference between these fields on Chrome, there's no tooltips available (at least not on macOS). The "memory footprint" field seems to be new, because I don't recall seeing it before yesterday.</p>
0debug
Field with @CreationTimestamp annotation is null while save on repository : <p>(1) Why is the "@CreationTimestamp" field updated to null for a "save" called on the repository with a null value for that field? I expect that a field annotated with "@CreationTimestamp" is never updated and maintained only once at the time of creation. But it does not work that way in my current project.</p> <p>(2) I had to include @Column(updatable =false) (in addition to @CreationTimestamp annotation). Why is this necessary?</p>
0debug
Where am I making a mistake in the following code using MATLAB? : I have written this code: for i = 1:length(Speed) new=find(Speed>edges,1,'last'); N(new)=N(new)+1; end where Speed is 3000x1, edges is 50x1, and N is 50x1. After running this code I get the following error: 'Error using > Matrix dimensions must agree'. I want to store the value in 'new' whenever Speed > edges, but I am getting this error.
0debug
How to fetch data between two dates with axios : How do I fetch data between two dates with axios? I tried this, but without success: ``` const res = axios.get("/activities", { params: { date: { gte: startDate, lte: endDate }, user: id } }); ```
0debug
static av_cold int mpc7_decode_init(AVCodecContext * avctx) { int i, j; MPCContext *c = avctx->priv_data; GetBitContext gb; LOCAL_ALIGNED_16(uint8_t, buf, [16]); static int vlc_initialized = 0; static VLC_TYPE scfi_table[1 << MPC7_SCFI_BITS][2]; static VLC_TYPE dscf_table[1 << MPC7_DSCF_BITS][2]; static VLC_TYPE hdr_table[1 << MPC7_HDR_BITS][2]; static VLC_TYPE quant_tables[7224][2]; if (avctx->channels != 2) { av_log_ask_for_sample(avctx, "Unsupported number of channels: %d\n", avctx->channels); return AVERROR_PATCHWELCOME; } if(avctx->extradata_size < 16){ av_log(avctx, AV_LOG_ERROR, "Too small extradata size (%i)!\n", avctx->extradata_size); return -1; } memset(c->oldDSCF, 0, sizeof(c->oldDSCF)); av_lfg_init(&c->rnd, 0xDEADBEEF); ff_dsputil_init(&c->dsp, avctx); ff_mpadsp_init(&c->mpadsp); c->dsp.bswap_buf((uint32_t*)buf, (const uint32_t*)avctx->extradata, 4); ff_mpc_init(); init_get_bits(&gb, buf, 128); c->IS = get_bits1(&gb); c->MSS = get_bits1(&gb); c->maxbands = get_bits(&gb, 6); if(c->maxbands >= BANDS){ av_log(avctx, AV_LOG_ERROR, "Too many bands: %i\n", c->maxbands); return -1; } skip_bits_long(&gb, 88); c->gapless = get_bits1(&gb); c->lastframelen = get_bits(&gb, 11); av_log(avctx, AV_LOG_DEBUG, "IS: %d, MSS: %d, TG: %d, LFL: %d, bands: %d\n", c->IS, c->MSS, c->gapless, c->lastframelen, c->maxbands); c->frames_to_skip = 0; avctx->sample_fmt = AV_SAMPLE_FMT_S16; avctx->channel_layout = AV_CH_LAYOUT_STEREO; if(vlc_initialized) return 0; av_log(avctx, AV_LOG_DEBUG, "Initing VLC\n"); scfi_vlc.table = scfi_table; scfi_vlc.table_allocated = 1 << MPC7_SCFI_BITS; if(init_vlc(&scfi_vlc, MPC7_SCFI_BITS, MPC7_SCFI_SIZE, &mpc7_scfi[1], 2, 1, &mpc7_scfi[0], 2, 1, INIT_VLC_USE_NEW_STATIC)){ av_log(avctx, AV_LOG_ERROR, "Cannot init SCFI VLC\n"); return -1; } dscf_vlc.table = dscf_table; dscf_vlc.table_allocated = 1 << MPC7_DSCF_BITS; if(init_vlc(&dscf_vlc, MPC7_DSCF_BITS, MPC7_DSCF_SIZE, &mpc7_dscf[1], 2, 1, &mpc7_dscf[0], 2, 1, INIT_VLC_USE_NEW_STATIC)){ av_log(avctx, 
AV_LOG_ERROR, "Cannot init DSCF VLC\n"); return -1; } hdr_vlc.table = hdr_table; hdr_vlc.table_allocated = 1 << MPC7_HDR_BITS; if(init_vlc(&hdr_vlc, MPC7_HDR_BITS, MPC7_HDR_SIZE, &mpc7_hdr[1], 2, 1, &mpc7_hdr[0], 2, 1, INIT_VLC_USE_NEW_STATIC)){ av_log(avctx, AV_LOG_ERROR, "Cannot init HDR VLC\n"); return -1; } for(i = 0; i < MPC7_QUANT_VLC_TABLES; i++){ for(j = 0; j < 2; j++){ quant_vlc[i][j].table = &quant_tables[quant_offsets[i*2 + j]]; quant_vlc[i][j].table_allocated = quant_offsets[i*2 + j + 1] - quant_offsets[i*2 + j]; if(init_vlc(&quant_vlc[i][j], 9, mpc7_quant_vlc_sizes[i], &mpc7_quant_vlc[i][j][1], 4, 2, &mpc7_quant_vlc[i][j][0], 4, 2, INIT_VLC_USE_NEW_STATIC)){ av_log(avctx, AV_LOG_ERROR, "Cannot init QUANT VLC %i,%i\n",i,j); return -1; } } } vlc_initialized = 1; avcodec_get_frame_defaults(&c->frame); avctx->coded_frame = &c->frame; return 0; }
1threat
static int do_sigframe_return_v2(CPUARMState *env, target_ulong frame_addr, struct target_ucontext_v2 *uc) { sigset_t host_set; abi_ulong *regspace; target_to_host_sigset(&host_set, &uc->tuc_sigmask); sigprocmask(SIG_SETMASK, &host_set, NULL); if (restore_sigcontext(env, &uc->tuc_mcontext)) return 1; regspace = uc->tuc_regspace; if (arm_feature(env, ARM_FEATURE_VFP)) { regspace = restore_sigframe_v2_vfp(env, regspace); if (!regspace) { return 1; } } if (arm_feature(env, ARM_FEATURE_IWMMXT)) { regspace = restore_sigframe_v2_iwmmxt(env, regspace); if (!regspace) { return 1; } } if (do_sigaltstack(frame_addr + offsetof(struct target_ucontext_v2, tuc_stack), 0, get_sp_from_cpustate(env)) == -EFAULT) return 1; #if 0 if (ptrace_cancel_bpt(current)) send_sig(SIGTRAP, current, 1); #endif return 0; }
1threat
Copy or clone a collection in Julia : <p>I have created a one-dimensional array (vector) in Julia, namely <code>a=[1, 2, 3, 4, 5]</code>. Now I want to create a new vector <code>b</code>, where <code>b</code> has exactly the same elements as <code>a</code>, i.e. <code>b=[1, 2, 3, 4, 5]</code>.</p> <p>It seems that directly using <code>b = a</code> just creates a reference to the original collection, which means that if <code>a</code> is mutable and I modify <code>b</code>, the modification is also reflected in <code>a</code>. For example, if I call <code>pop!(b)</code>, then <code>b=[1, 2, 3, 4]</code> and <code>a=[1, 2, 3, 4]</code>.</p> <p>I am wondering if there is an official function to simply copy or clone the collection, so that a change in <code>b</code> does not happen in <code>a</code>. One solution I found is <code>b = collect(a)</code>. I would appreciate it if someone could provide some other approaches.</p>
0debug
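The aliasing behaviour described in the Julia question above has a direct analogue in most languages with reference semantics. A minimal Python sketch (illustrative only, not Julia) of the alias-versus-copy distinction:

```python
# Plain assignment binds a second name to the same list object,
# while copy() creates an independent shallow copy.
a = [1, 2, 3, 4, 5]
b = a                  # alias: b and a are the same object
b.pop()
after_alias = list(a)  # a was mutated through b

a = [1, 2, 3, 4, 5]
c = a.copy()           # independent shallow copy, like Julia's copy(a)
c.pop()
after_copy = list(a)   # a is untouched
```

The same distinction (binding versus copying) is what `b = a` versus `b = copy(a)` or `b = collect(a)` expresses in Julia.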
static av_cold int indeo3_decode_init(AVCodecContext *avctx) { Indeo3DecodeContext *s = avctx->priv_data; s->avctx = avctx; s->width = avctx->width; s->height = avctx->height; avctx->pix_fmt = PIX_FMT_YUV410P; build_modpred(s); iv_alloc_frames(s); return 0; }
1threat
static inline uint64_t hpet_calculate_diff(HPETTimer *t, uint64_t current) { if (t->config & HPET_TN_32BIT) { uint32_t diff, cmp; cmp = (uint32_t)t->cmp; diff = cmp - (uint32_t)current; diff = (int32_t)diff > 0 ? diff : (uint32_t)0; return (uint64_t)diff; } else { uint64_t diff, cmp; cmp = t->cmp; diff = cmp - current; diff = (int64_t)diff > 0 ? diff : (uint64_t)0; return diff; } }
1threat
SparkContext.addFile vs spark-submit --files : <p>I am using Spark 1.6.0. I want to pass some properties files like log4j.properties and some other custom properties files. I see that we can use --files, but I also saw that there is a method addFile in SparkContext. I would prefer to use --files instead of programmatically adding the files, assuming both options are the same.</p> <p>I did not find much documentation about --files, so are --files and SparkContext.addFile the same option?</p> <p>References I found about <a href="http://spark.apache.org/docs/1.6.0/running-on-yarn.html#configuration" rel="noreferrer">--files</a> and for <a href="https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/SparkContext.html#addFile(java.lang.String)" rel="noreferrer">SparkContext.addFile</a>.</p>
0debug
Use jQuery to show an element by its ID : <p>I know this is a simple question, but I couldn't find the answer. I know you can use jQuery's show method on a class, such as: </p> <pre><code>$('.class1').show(); </code></pre> <p>But if I just want to call it on a single element rather than its entire class, for example one with an id of "am0", how do I do that? I tried:</p> <pre><code>$("am0").show(); </code></pre> <p>But this doesn't work. </p>
0debug
static int rv34_decode_slice(RV34DecContext *r, int end, const uint8_t* buf, int buf_size) { MpegEncContext *s = &r->s; GetBitContext *gb = &s->gb; int mb_pos; int res; init_get_bits(&r->s.gb, buf, buf_size*8); res = r->parse_slice_header(r, gb, &r->si); if(res < 0){ av_log(s->avctx, AV_LOG_ERROR, "Incorrect or unknown slice header\n"); return -1; if ((s->mb_x == 0 && s->mb_y == 0) || s->current_picture_ptr==NULL) { if(s->width != r->si.width || s->height != r->si.height){ av_log(s->avctx, AV_LOG_DEBUG, "Changing dimensions to %dx%d\n", r->si.width,r->si.height); MPV_common_end(s); s->width = r->si.width; s->height = r->si.height; avcodec_set_dimensions(s->avctx, s->width, s->height); if(MPV_common_init(s) < 0) return -1; r->intra_types_stride = s->mb_width*4 + 4; r->intra_types_hist = av_realloc(r->intra_types_hist, r->intra_types_stride * 4 * 2 * sizeof(*r->intra_types_hist)); r->intra_types = r->intra_types_hist + r->intra_types_stride * 4; r->mb_type = av_realloc(r->mb_type, r->s.mb_stride * r->s.mb_height * sizeof(*r->mb_type)); r->cbp_luma = av_realloc(r->cbp_luma, r->s.mb_stride * r->s.mb_height * sizeof(*r->cbp_luma)); r->cbp_chroma = av_realloc(r->cbp_chroma, r->s.mb_stride * r->s.mb_height * sizeof(*r->cbp_chroma)); r->deblock_coefs = av_realloc(r->deblock_coefs, r->s.mb_stride * r->s.mb_height * sizeof(*r->deblock_coefs)); s->pict_type = r->si.type ? 
r->si.type : AV_PICTURE_TYPE_I; if(MPV_frame_start(s, s->avctx) < 0) return -1; ff_er_frame_start(s); if (!r->tmp_b_block_base || s->width != r->si.width || s->height != r->si.height) { int i; av_free(r->tmp_b_block_base); r->tmp_b_block_base = av_malloc(s->linesize * 48); for (i = 0; i < 2; i++) r->tmp_b_block_y[i] = r->tmp_b_block_base + i * 16 * s->linesize; for (i = 0; i < 4; i++) r->tmp_b_block_uv[i] = r->tmp_b_block_base + 32 * s->linesize + (i >> 1) * 8 * s->uvlinesize + (i & 1) * 16; r->cur_pts = r->si.pts; if(s->pict_type != AV_PICTURE_TYPE_B){ r->last_pts = r->next_pts; r->next_pts = r->cur_pts; }else{ int refdist = GET_PTS_DIFF(r->next_pts, r->last_pts); int dist0 = GET_PTS_DIFF(r->cur_pts, r->last_pts); int dist1 = GET_PTS_DIFF(r->next_pts, r->cur_pts); if(!refdist){ r->weight1 = r->weight2 = 8192; }else{ r->weight1 = (dist0 << 14) / refdist; r->weight2 = (dist1 << 14) / refdist; s->mb_x = s->mb_y = 0; r->si.end = end; s->qscale = r->si.quant; r->bits = buf_size*8; s->mb_num_left = r->si.end - r->si.start; r->s.mb_skip_run = 0; mb_pos = s->mb_x + s->mb_y * s->mb_width; if(r->si.start != mb_pos){ av_log(s->avctx, AV_LOG_ERROR, "Slice indicates MB offset %d, got %d\n", r->si.start, mb_pos); s->mb_x = r->si.start % s->mb_width; s->mb_y = r->si.start / s->mb_width; memset(r->intra_types_hist, -1, r->intra_types_stride * 4 * 2 * sizeof(*r->intra_types_hist)); s->first_slice_line = 1; s->resync_mb_x = s->mb_x; s->resync_mb_y = s->mb_y; ff_init_block_index(s); while(!check_slice_end(r, s)) { ff_update_block_index(s); s->dsp.clear_blocks(s->block[0]); if(rv34_decode_macroblock(r, r->intra_types + s->mb_x * 4 + 4) < 0){ ff_er_add_slice(s, s->resync_mb_x, s->resync_mb_y, s->mb_x-1, s->mb_y, AC_ERROR|DC_ERROR|MV_ERROR); return -1; if (++s->mb_x == s->mb_width) { s->mb_x = 0; s->mb_y++; ff_init_block_index(s); memmove(r->intra_types_hist, r->intra_types, r->intra_types_stride * 4 * sizeof(*r->intra_types_hist)); memset(r->intra_types, -1, r->intra_types_stride * 4 
* sizeof(*r->intra_types_hist)); if(r->loop_filter && s->mb_y >= 2) r->loop_filter(r, s->mb_y - 2); if(s->mb_x == s->resync_mb_x) s->first_slice_line=0; s->mb_num_left--; ff_er_add_slice(s, s->resync_mb_x, s->resync_mb_y, s->mb_x-1, s->mb_y, AC_END|DC_END|MV_END); return s->mb_y == s->mb_height;
1threat
How to Make a Select All Checkbox with PHP : I want all checkboxes to become selected when I click "Select All", and deselected when I click it again. But when I click it, nothing happens. My code: <form id="sort_select_delete_form" method="get"> <div class="btn-group"> <a href="#check-all" class="btn btn-primary" id="check-all">Select All</a> <input type="submit" class="btn btn-danger" value="Delete Selected"> </div> <input class="form-check-input" type="checkbox" value="1000000001" id="deleteid[]" name="deleteid[]">select1 <input class="form-check-input" type="checkbox" value="1000000002" id="deleteid[]" name="deleteid[]">select2 <input class="form-check-input" type="checkbox" value="1000000003" id="deleteid[]" name="deleteid[]">select3 <input class="form-check-input" type="checkbox" value="1000000004" id="deleteid[]" name="deleteid[]">select4 <input class="form-check-input" type="checkbox" value="1000000005" id="deleteid[]" name="deleteid[]">select5 </form> js $(document).ready(function(){ $("#check-all").click(function(){ var check = $(this).attr('checked'); if(check=="checked") { $("#deleteid").attr('checked','checked'); } else $("#deleteid").removeAttr('checked'); }); }); [jsfiddle link][1] [1]: https://jsfiddle.net/qa2fkpyv/2/
0debug
Declaring a jagged array in C : I have data in the following format: {{{0}},{{1}},{{2,3}},{{1,2},{3,4}},{{5,6},{7,8},{9,10}},.... Is there any way of storing this in a jagged array? The data is large, and I would like to include it directly in the code. I searched the internet, and it says I can declare it in the following way: { new int[][] { new int[] {0} }, new int[][] { new int[] {1} }, new int[][] { new int[] {2,3} }, new int[][] {new int[] {1,2} ,new int[] {3,4} } , ... but typing all those new int[][] would be too time-consuming, and I am looking for a way to use the original data directly in the code. Is there a way to do that? Any suggestion would be appreciated!
0debug
SQL Not Executing : The first SQL statement executes, but the second one doesn't seem to work. When I change the second query to the first one it works just fine, but written like this it doesn't work for some reason. I've just started learning MySQL, and I'm really struggling with this one and with understanding the language. //Classic One that checks if the hwid is there public void checkHWID(string HWID) { string line; using (SqlConnection con = new SqlConnection(connectionString)) { con.Open(); using (SqlCommand cmd = new SqlCommand("SELECT * FROM Users WHERE HWID = @HWID", con)) { cmd.Parameters.AddWithValue("@HWID", HWID); using (SqlDataReader reader = cmd.ExecuteReader()) { if (reader.Read()) { line = reader[1].ToString(); Console.Write(line); con.Close(); } else { updateHWID(HWID); } } } } } //This one doesn't seem to update the hwid but when i change the query to the first one it works just fine public void updateHWID(String HWID) { using (SqlConnection connection = new SqlConnection(connectionString)) { connection.Open(); using (SqlCommand command = new SqlCommand("INSERT INTO USERS(hwid) VALUES(@HWID)", connection)) { command.Parameters.AddWithValue("@HWID", HWID); connection.Close(); } } }
0debug
How to click the Continue button in a pop-up window? : I have tried to click the "**Continue**" button in the attached pop-up window screenshot. However, a normal Selenium "click" is not working. Could you please advise how to click the Continue button in this pop-up window? [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/Ie8Yz.png ***HTML content of button:*** <button class="oval-btn last-popup-btn startang teal track-564677" data-track="{&quot;loc&quot;:&quot;popup&quot;,&quot;nm&quot;:&quot;tpg-tool-start-popup&quot;}">Continue</button>
0debug
static void gen_swa(DisasContext *dc, TCGv rb, TCGv ra, int32_t ofs) { TCGv ea, val; TCGLabel *lab_fail, *lab_done; ea = tcg_temp_new(); tcg_gen_addi_tl(ea, ra, ofs); lab_fail = gen_new_label(); lab_done = gen_new_label(); tcg_gen_brcond_tl(TCG_COND_NE, ea, cpu_lock_addr, lab_fail); tcg_temp_free(ea); val = tcg_temp_new(); tcg_gen_atomic_cmpxchg_tl(val, cpu_lock_addr, cpu_lock_value, rb, dc->mem_idx, MO_TEUL); tcg_gen_setcond_tl(TCG_COND_EQ, cpu_sr_f, val, cpu_lock_value); tcg_temp_free(val); tcg_gen_br(lab_done); gen_set_label(lab_fail); tcg_gen_movi_tl(cpu_sr_f, 0); gen_set_label(lab_done); tcg_gen_movi_tl(cpu_lock_addr, -1); }
1threat
How do I find the parent directory of ".." in C : I am iterating over all files in a folder; however, if I encounter ".." I want to find its parent directory. For example, if the current dir is **user/doc/..** I want to be able to get **user/doc/**. I tried to find this online, but it doesn't seem to be there. Thanks
0debug
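The parent-directory question above reduces to stripping the last path component, which in C would be `dirname(3)` from `libgen.h`. A minimal Python sketch (illustrative only, not the C code) of the two related operations:

```python
import os.path

entry = "user/doc/.."
# The directory that contains the ".." entry: drop the last component.
containing_dir = os.path.dirname(entry)   # "user/doc"
# What ".." actually resolves to, done purely textually.
collapsed = os.path.normpath(entry)       # "user"
```

Note `dirname` here is textual; it does not touch the filesystem, which matches what the question asks for.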
Sort dictionary with custom key in Swift 2 : <p>I have a dictionary with keys in the format: [1:ABC, 113:NUX, 78:BUN, 34:POI, 223:NTY]</p> <p>When I sort the array of keys, I get the sorted key array as: [1:ABC, 113:NUX, 223:NTY, 34:POI, 78:BUN]</p> <p>But I want the sorted array to be: [1:ABC, 34:POI, 78:BUN, 113:NUX, 223:NTY] </p> <p>What am I missing here? What additional sort should I add? I am using Swift 2.</p>
0debug
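The ordering observed in the Swift question is plain lexicographic string comparison: '113' sorts before '34' because '1' < '3' character by character. A minimal Python sketch (illustrative, not Swift) of the effect and the numeric-key fix:

```python
d = {"1": "ABC", "113": "NUX", "78": "BUN", "34": "POI", "223": "NTY"}

# Default sort compares the keys as strings, character by character.
lexicographic = sorted(d)

# Converting each key to an integer inside the sort key gives numeric order.
numeric = sorted(d, key=int)
```

The Swift equivalent of the fix is to compare the keys as `Int` (or store `Int` keys in the first place) rather than as strings.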
I have some problems in a Java class; I can't get the EditText as a String : package com.example.dato.task; import android.app.DialogFragment; import android.os.Bundle; import android.support.annotation.Nullable; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.view.Window; import android.widget.Button; import android.widget.EditText; /** * Created by DATO on 2/17/2018. */ public class DialogClass extends DialogFragment{ public DialogClass(){} String s; String s1; @Nullable @Override public View onCreateView( LayoutInflater inflater, @Nullable ViewGroup container, Bundle savedInstanceState) { final View dialogView= View.inflate(getActivity(), R.layout.dialog, null); getDialog().requestWindowFeature(Window.FEATURE_NO_TITLE); final EditText e1 = dialogView.findViewById(R.id.text1); final EditText e2= dialogView.findViewById(R.id.text2); Button bt = dialogView.findViewById(R.id.button); bt.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view){ s = e1.getText().toString(); s1 = e2.getText().toString(); dismiss(); } }); return dialogView; } }
0debug
How to convert a base64 string into hex in android? : <p>I am getting a base64 string. How do I convert it to hex. I tried the followind but it isn't working</p> <pre><code>String guid = "YxRfXk827kPgkmMUX15PNg=="; byte[] decoded = Base64.decodeBase64(guid); String hexString = Hex.encodeHexString(decoded); System.out.println(hexString); </code></pre>
0debug
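The Java snippet in the base64 question is conceptually the right pipeline: decode the base64 text to raw bytes, then hex-encode those bytes. A minimal Python sketch of the same two steps, using a simple known input rather than the question's GUID:

```python
import base64

# Step 1: base64 text -> raw bytes.
decoded = base64.b64decode("aGVsbG8=")   # b"hello"
# Step 2: raw bytes -> hex string.
hex_string = decoded.hex()               # "68656c6c6f"
```

If the Java version prints nothing useful, the usual suspects are the imports: `Base64.decodeBase64` and `Hex.encodeHexString` come from Apache Commons Codec, not from `android.util.Base64`, which has a different API.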
TypeScript for ... of with index / key? : <p>As described <a href="https://basarat.gitbooks.io/typescript/content/docs/for...of.html">here</a> TypeScript introduces a foreach loop:</p> <pre><code>var someArray = [9, 2, 5]; for (var item of someArray) { console.log(item); // 9,2,5 } </code></pre> <p>But isn't there any index/key? I would expect something like:</p> <pre><code>for (var item, key of someArray) { ... } </code></pre>
0debug
Center image in Bulma : <p>Is there a way to horizontally center an image inside a card?</p> <p>I have the following</p> <pre><code> &lt;div class='column is-one-quarter has-text-centered'&gt; &lt;div class='card equal-height'&gt; &lt;div class='card-content'&gt; &lt;figure class='image is-64x64'&gt;&lt;img src='...'&gt;&lt;/figure&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>and I cannot center the image. I have tried adding <code>is-centered</code> both to the figure and to the parent div, but nothing changes.</p> <p>Thanks.</p>
0debug
What is the difference between tf.estimator.Estimator and tf.contrib.learn.Estimator in TensorFlow : <p>Some months ago, I used the <code>tf.contrib.learn.DNNRegressor</code> API from TensorFlow, which I found very convenient to use. I didn't keep up with the development of TensorFlow the last few months. Now I have a project where I want to use a Regressor again, but with more control over the actual model as provided by <code>DNNRegressor</code>. As far as I can see, this is supported by the <code>Estimator</code> API using the <code>model_fn</code> parameter.</p> <p>But there are two <code>Estimator</code>s in the TensorFlow API:</p> <ul> <li><code>tf.contrib.learn.Estimator</code></li> <li><code>tf.estimator.Estimator</code></li> </ul> <p>Both provide a similar API, but are nevertheless slightly different in their usage. Why are there two different implementations and are there reasons to prefer one?</p> <p>Unfortunately, I can't find any differences in the TensorFlow documentation or a guide when to use which of both. Actually, working through the TensorFlow tutorials produced a lot of warnings as some of the interfaces apparently have changed (instead of the <code>x</code>,<code>y</code> parameter, the <code>input_fn</code> parameter et cetera).</p>
0debug
Copy the text from one text box to another on key press : I have two textboxes. I want to copy the text from one textbox to the other in the key press event using jQuery <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-js --> $(document).ready(function(){ $(".first").keydown(function(){ $(".second").value($(".first").value); }); }); <!-- language: lang-html --> <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> Enter your name: <input type="text" class="first"> <input type="text" class="second"> <!-- end snippet -->
0debug
After building TensorFlow from source, seeing libcudart.so and libcudnn errors : <p>I'm building TensorFlow from source code. The build appears to succeed; however, when my TensorFlow program invokes <code>import tensorflow</code>, one or both of the following errors appear:</p> <ul> <li><code>ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory</code></li> <li><code>ImportError: libcudnn.5: cannot open shared object file: No such file or directory</code></li> </ul>
0debug
What is the purpose of the default keyword in this scenario? : <p>I saw a question which looks like this:</p> <pre><code>public @interface Controller { /** * The value may indicate a suggestion for a logical component name, * to be turned into a Spring bean in case of an autodetected component. * @return the suggested component name, if any */ String value() default ""; } </code></pre> <p>What is the default keyword, and what is the "" after default?</p>
0debug
Utilizing bluetooth LE on Raspberry Pi using .Net Core : <p>I'd like to build a GATT client in .NET Core. It will deploy to a RPi3 running Raspbian Lite controlling multiple BLE devices. Is there currently support for Bluetooth LE in the .Net Core Framework (2.2 or 3 preview)? </p> <p>I'm aware of an alternative using a UWP library on Windows 10 IoT on the RPi, but I'd prefer to run Raspbian Lite instead. Are there currently any other alternatives for such a stack?</p>
0debug
Saving CheckBoxes in Android Studio : Hello, I am a beginner in Java, and I am having trouble saving the checkbox states. I need to create a list, and these options would be saved locally in the app itself. The code I have works perfectly for just one checkbox; when I try to add another, the second one doesn't work and saves the same data as the first. Could someone help me? public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); final CheckBox checkBox = (CheckBox) findViewById(R.id.checkBox); final CheckBox checkBox2 = (CheckBox) findViewById(R.id.checkBox2); SharedPreferences preferences = PreferenceManager.getDefaultSharedPreferences(this); final SharedPreferences.Editor editor = preferences.edit(); if (preferences.contains("checked") && preferences.getBoolean("checked", false) == true) { checkBox.setChecked(true); } else { checkBox.setChecked(false); } checkBox.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() { @Override public void onCheckedChanged(CompoundButton compoundButton, boolean b) { if (checkBox.isChecked()) { editor.putBoolean("checked", true); editor.apply(); } else { editor.putBoolean("checked", false); editor.apply(); } } }); SharedPreferences preferences2 = PreferenceManager.getDefaultSharedPreferences(this); final SharedPreferences.Editor editor2 = preferences2.edit(); if (preferences2.contains("checked") && preferences2.getBoolean("checked", false) == true) { checkBox2.setChecked(true); } else { checkBox2.setChecked(false); } checkBox2.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() { @Override public void onCheckedChanged(CompoundButton compoundButton, boolean b) { if (checkBox2.isChecked()) { editor2.putBoolean("checked", true); editor2.apply(); } else { editor2.putBoolean("checked", false); editor2.apply(); } } }); } }
0debug
Getting 24hr time when adding milliseconds to a Date? : <p>I have the following Java code that takes a date and should add a full day to the date:</p> <pre><code>SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ"); String dateString = "2017-01-30T19:00:00+0000" Date date = formatter.parse(dateString); long timeBetweenStartDates = 24 * 60 * 1000; Long DateWithOneDayAddedInMilis = date.getTime()+timeBetweenStartDates; Date dateWithOneDayAdded = new Date((DateWithOneDayAddedInMilis)); </code></pre> <p>The value I am getting for <code>dateWithOneDayAdded</code> is:</p> <pre><code>Mon Jan 30 13:24:00 GMT 2017 </code></pre> <p>What I am looking for here would be:</p> <pre><code> Tue Jan 31 13:24:00 GMT 2017 </code></pre> <p>How can I ensure that the date is in the format I expect?</p>
0debug
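A likely culprit in the milliseconds question is the offset constant itself: `24 * 60 * 1000` is 1,440,000 ms, i.e. 24 minutes, not 24 hours. A minimal Python sketch (illustrative, not the asker's Java) of the arithmetic:

```python
from datetime import datetime, timedelta, timezone

SUSPECT_OFFSET_MS = 24 * 60 * 1000   # 1,440,000 ms = only 24 minutes
ONE_DAY_MS = 24 * 60 * 60 * 1000     # 86,400,000 ms = a full day

start = datetime(2017, 1, 30, 19, 0, tzinfo=timezone.utc)
next_day = start + timedelta(milliseconds=ONE_DAY_MS)  # Jan 31, 19:00 UTC
```

In Java the same fix is `24 * 60 * 60 * 1000L` (the `L` also guards against `int` overflow in longer-period arithmetic).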
static unsigned int dec_movem_rm(DisasContext *dc) { TCGv tmp; TCGv addr; int i; DIS(fprintf (logfile, "movem $r%u, [$r%u%s\n", dc->op2, dc->op1, dc->postinc ? "+]" : "]")); cris_flush_cc_state(dc); tmp = tcg_temp_new(TCG_TYPE_TL); addr = tcg_temp_new(TCG_TYPE_TL); tcg_gen_movi_tl(tmp, 4); tcg_gen_mov_tl(addr, cpu_R[dc->op1]); for (i = 0; i <= dc->op2; i++) { gen_store(dc, addr, cpu_R[i], 4); tcg_gen_add_tl(addr, addr, tmp); } if (dc->postinc) tcg_gen_mov_tl(cpu_R[dc->op1], addr); cris_cc_mask(dc, 0); tcg_temp_free(tmp); tcg_temp_free(addr); return 2; }
1threat
R: group two columns together by ID and compute cumulative sums : I've read through countless posts trying to group two columns together and compute a cumulative sum. The closest I've come is: ave(contest$loser_points, (contest$loser), FUN=cumsum) What I want, however, is something that creates the Winner_Total and Loser_Total columns like below, so that when player 1 competes in the final match, his points from when he both won and lost are in the loser total column. I've previously done this with for loops, but because the real dataset is 10,000 rows it takes too much time. Winner Loser Winner_Points Loser_Points Winner_Total Loser_Total 1 2 5 5 5 5 2 3 4 2 9 2 3 1 12 2 14 7 4 3 2 6 2 20 1 3 1 6 8 26 2 1 6 2 15 10
0debug
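The running-total logic in the R question can be sketched with a plain dictionary keyed by player ID, crediting both sides of each match as it is scanned. A minimal Python illustration using the question's sample data (not a vectorised R answer, just the algorithm):

```python
# Rows: (winner, loser, winner_points, loser_points)
rows = [
    (1, 2, 5, 5),
    (2, 3, 4, 2),
    (3, 1, 12, 2),
    (4, 3, 2, 6),
    (1, 3, 1, 6),
    (2, 1, 6, 2),
]

totals = {}                       # player id -> points so far (won or lost)
winner_total, loser_total = [], []
for winner, loser, w_pts, l_pts in rows:
    totals[winner] = totals.get(winner, 0) + w_pts
    totals[loser] = totals.get(loser, 0) + l_pts
    winner_total.append(totals[winner])
    loser_total.append(totals[loser])
```

This reproduces the question's desired Winner_Total and Loser_Total columns in a single pass, which is the property a fast R solution would also need.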
static QString *qstring_from_escaped_str(JSONParserContext *ctxt, QObject *token) { const char *ptr = token_get_value(token); QString *str; int double_quote = 1; if (*ptr == '"') { double_quote = 1; } else { double_quote = 0; } ptr++; str = qstring_new(); while (*ptr && ((double_quote && *ptr != '"') || (!double_quote && *ptr != '\''))) { if (*ptr == '\\') { ptr++; switch (*ptr) { case '"': qstring_append(str, "\""); ptr++; break; case '\'': qstring_append(str, "'"); ptr++; break; case '\\': qstring_append(str, "\\"); ptr++; break; case '/': qstring_append(str, "/"); ptr++; break; case 'b': qstring_append(str, "\b"); ptr++; break; case 'f': qstring_append(str, "\f"); ptr++; break; case 'n': qstring_append(str, "\n"); ptr++; break; case 'r': qstring_append(str, "\r"); ptr++; break; case 't': qstring_append(str, "\t"); ptr++; break; case 'u': { uint16_t unicode_char = 0; char utf8_char[4]; int i = 0; ptr++; for (i = 0; i < 4; i++) { if (qemu_isxdigit(*ptr)) { unicode_char |= hex2decimal(*ptr) << ((3 - i) * 4); } else { parse_error(ctxt, token, "invalid hex escape sequence in string"); goto out; } ptr++; } wchar_to_utf8(unicode_char, utf8_char, sizeof(utf8_char)); qstring_append(str, utf8_char); } break; default: parse_error(ctxt, token, "invalid escape sequence in string"); goto out; } } else { char dummy[2]; dummy[0] = *ptr++; dummy[1] = 0; qstring_append(str, dummy); } } return str; out: QDECREF(str); return NULL; }
1threat
def check_monthnumber_number(monthnum3): if(monthnum3==4 or monthnum3==6 or monthnum3==9 or monthnum3==11): return True else: return False
0debug
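The month-number check above works, but a membership test expresses "is this one of the 30-day months?" more directly than a chain of `==` comparisons, and `in` already returns a boolean so the `if`/`else` is redundant. A minimal sketch (the function name here is chosen for illustration):

```python
def is_thirty_day_month(monthnum3):
    # April, June, September, November
    return monthnum3 in (4, 6, 9, 11)
```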
Work Around For Whitespaces : <p>How can I get rid of whitespace when I don't need it?</p> <p>I have to check checkboxes, then add them to a listbox, then pass them to a multiline textbox. Then of course I have to put Listbox1.Items.Add(Environment.NewLine); at the end.</p> <p>This makes the items in the textbox separated, but I want that whitespace to be gone when I process the data. How do I remove it?</p>
0debug
difference between v1 and v2 of Eric's css-reset : <p>I am comparing the difference between Eric Meyer's CSS reset at <a href="https://meyerweb.com/eric/tools/css/reset/index.html" rel="nofollow noreferrer">https://meyerweb.com/eric/tools/css/reset/index.html</a></p> <p>I notice that v2 doesn't reset <code>outline</code> or <code>background</code>. Why?</p>
0debug
static inline void tcg_temp_free_internal(int idx) { TCGContext *s = &tcg_ctx; TCGTemp *ts; int k; assert(idx >= s->nb_globals && idx < s->nb_temps); ts = &s->temps[idx]; assert(ts->temp_allocated != 0); ts->temp_allocated = 0; k = ts->base_type; if (ts->temp_local) k += TCG_TYPE_COUNT; ts->next_free_temp = s->first_free_temp[k]; s->first_free_temp[k] = idx;
1threat
Different representation of UUID in Java Hibernate and SQL Server : <p>I am trying to map a <code>UUID</code> column in POJO to SQL Server table column using Hibernate.</p> <p>The annotations are applied as follows:</p> <pre><code>@Id @GeneratedValue @Column(name = "Id", columnDefinition = "uniqueidentifier") public UUID getId(){ ... } </code></pre> <p>However, it seems that there is some endianness problem between the Java Hibernate mapping and SQL server.</p> <p>For example, in my Java app, I have ids represented as:</p> <pre><code>4375CF8E-DEF5-43F6-92F3-074D34A4CE35 ADE3DAF8-A62B-4CE2-9D8C-B4E4A54E3DA1 </code></pre> <p>whereas in SQL Server, these are represented as:</p> <pre><code>8ECF7543-F5DE-F643-92F3-074D34A4CE35 F8DAE3AD-2BA6-E24C-9D8C-B4E4A54E3DA1 </code></pre> <p>Is there any way to have the same representation on both sides?</p> <p>Please note that <code>uniqueidentifier</code> is used only to have a <code>uniqueidentifier</code> typed id in SQL server instead of type <code>binary</code>; the same problem exists when <code>uniqueidentifier</code> is removed from the annotation (the problem can be observed by converting <code>binary</code> ids to <code>uniqueidentifier</code>).</p>
0debug
static void test_redirector_rx(void) { int backend_sock[2], send_sock; char *cmdline; uint32_t ret = 0, len = 0; char send_buf[] = "Hello!!"; char sock_path0[] = "filter-redirector0.XXXXXX"; char sock_path1[] = "filter-redirector1.XXXXXX"; char *recv_buf; uint32_t size = sizeof(send_buf); size = htonl(size); ret = socketpair(PF_UNIX, SOCK_STREAM, 0, backend_sock); g_assert_cmpint(ret, !=, -1); ret = mkstemp(sock_path0); g_assert_cmpint(ret, !=, -1); ret = mkstemp(sock_path1); g_assert_cmpint(ret, !=, -1); cmdline = g_strdup_printf("-netdev socket,id=qtest-bn0,fd=%d " "-device rtl8139,netdev=qtest-bn0,id=qtest-e0 " "-chardev socket,id=redirector0,path=%s,server,nowait " "-chardev socket,id=redirector1,path=%s,server,nowait " "-chardev socket,id=redirector2,path=%s,nowait " "-object filter-redirector,id=qtest-f0,netdev=qtest-bn0," "queue=rx,indev=redirector0 " "-object filter-redirector,id=qtest-f1,netdev=qtest-bn0," "queue=rx,outdev=redirector2 " "-object filter-redirector,id=qtest-f2,netdev=qtest-bn0," "queue=rx,indev=redirector1 " , backend_sock[1], sock_path0, sock_path1, sock_path0); qtest_start(cmdline); g_free(cmdline); struct iovec iov[] = { { .iov_base = &size, .iov_len = sizeof(size), }, { .iov_base = send_buf, .iov_len = sizeof(send_buf), }, }; send_sock = unix_connect(sock_path1, NULL); g_assert_cmpint(send_sock, !=, -1); qmp_discard_response("{ 'execute' : 'query-status'}"); ret = iov_send(send_sock, iov, 2, 0, sizeof(size) + sizeof(send_buf)); g_assert_cmpint(ret, ==, sizeof(send_buf) + sizeof(size)); close(send_sock); ret = qemu_recv(backend_sock[0], &len, sizeof(len), 0); g_assert_cmpint(ret, ==, sizeof(len)); len = ntohl(len); g_assert_cmpint(len, ==, sizeof(send_buf)); recv_buf = g_malloc(len); ret = qemu_recv(backend_sock[0], recv_buf, len, 0); g_assert_cmpstr(recv_buf, ==, send_buf); g_free(recv_buf); unlink(sock_path0); unlink(sock_path1); qtest_end(); }
1threat
static int adpcm_decode_frame(AVCodecContext *avctx, void *data, int *data_size, AVPacket *avpkt) { const uint8_t *buf = avpkt->data; int buf_size = avpkt->size; ADPCMDecodeContext *c = avctx->priv_data; ADPCMChannelStatus *cs; int n, m, channel, i; int block_predictor[2]; short *samples; short *samples_end; const uint8_t *src; int st; unsigned char last_byte = 0; unsigned char nibble; int decode_top_nibble_next = 0; int diff_channel; uint32_t samples_in_chunk; int32_t previous_left_sample, previous_right_sample; int32_t current_left_sample, current_right_sample; int32_t next_left_sample, next_right_sample; int32_t coeff1l, coeff2l, coeff1r, coeff2r; uint8_t shift_left, shift_right; int count1, count2; int coeff[2][2], shift[2]; if (!buf_size) return 0; if(*data_size/4 < buf_size + 8) return -1; samples = data; samples_end= samples + *data_size/2; *data_size= 0; src = buf; st = avctx->channels == 2 ? 1 : 0; switch(avctx->codec->id) { case CODEC_ID_ADPCM_IMA_QT: n = buf_size - 2*avctx->channels; for (channel = 0; channel < avctx->channels; channel++) { int16_t predictor; int step_index; cs = &(c->status[channel]); predictor = AV_RB16(src); step_index = predictor & 0x7F; predictor &= 0xFF80; src += 2; if (cs->step_index == step_index) { int diff = (int)predictor - cs->predictor; if (diff < 0) diff = - diff; if (diff > 0x7f) goto update; } else { update: cs->step_index = step_index; cs->predictor = predictor; } if (cs->step_index > 88){ av_log(avctx, AV_LOG_ERROR, "ERROR: step_index = %i\n", cs->step_index); cs->step_index = 88; } samples = (short*)data + channel; for(m=32; n>0 && m>0; n--, m--) { *samples = adpcm_ima_qt_expand_nibble(cs, src[0] & 0x0F, 3); samples += avctx->channels; *samples = adpcm_ima_qt_expand_nibble(cs, src[0] >> 4 , 3); samples += avctx->channels; src ++; } } if (st) samples--; break; case CODEC_ID_ADPCM_IMA_WAV: if (avctx->block_align != 0 && buf_size > avctx->block_align) buf_size = avctx->block_align; /* samples_per_block= 
(block_align-4*chanels)*8 / (bits_per_sample * chanels) + 1; */ for(i=0; i<avctx->channels; i++){ cs = &(c->status[i]); cs->predictor = *samples++ = (int16_t)bytestream_get_le16(&src); cs->step_index = *src++; if (cs->step_index > 88){ av_log(avctx, AV_LOG_ERROR, "ERROR: step_index = %i\n", cs->step_index); cs->step_index = 88; } if (*src++) av_log(avctx, AV_LOG_ERROR, "unused byte should be null but is %d!!\n", src[-1]); } while(src < buf + buf_size){ for(m=0; m<4; m++){ for(i=0; i<=st; i++) *samples++ = adpcm_ima_expand_nibble(&c->status[i], src[4*i] & 0x0F, 3); for(i=0; i<=st; i++) *samples++ = adpcm_ima_expand_nibble(&c->status[i], src[4*i] >> 4 , 3); src++; } src += 4*st; } break; case CODEC_ID_ADPCM_4XM: cs = &(c->status[0]); c->status[0].predictor= (int16_t)bytestream_get_le16(&src); if(st){ c->status[1].predictor= (int16_t)bytestream_get_le16(&src); } c->status[0].step_index= (int16_t)bytestream_get_le16(&src); if(st){ c->status[1].step_index= (int16_t)bytestream_get_le16(&src); } if (cs->step_index < 0) cs->step_index = 0; if (cs->step_index > 88) cs->step_index = 88; m= (buf_size - (src - buf))>>st; for(i=0; i<m; i++) { *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[i] & 0x0F, 4); if (st) *samples++ = adpcm_ima_expand_nibble(&c->status[1], src[i+m] & 0x0F, 4); *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[i] >> 4, 4); if (st) *samples++ = adpcm_ima_expand_nibble(&c->status[1], src[i+m] >> 4, 4); } src += m<<st; break; case CODEC_ID_ADPCM_MS: if (avctx->block_align != 0 && buf_size > avctx->block_align) buf_size = avctx->block_align; n = buf_size - 7 * avctx->channels; if (n < 0) return -1; block_predictor[0] = av_clip(*src++, 0, 6); block_predictor[1] = 0; if (st) block_predictor[1] = av_clip(*src++, 0, 6); c->status[0].idelta = (int16_t)bytestream_get_le16(&src); if (st){ c->status[1].idelta = (int16_t)bytestream_get_le16(&src); } c->status[0].coeff1 = ff_adpcm_AdaptCoeff1[block_predictor[0]]; c->status[0].coeff2 = 
ff_adpcm_AdaptCoeff2[block_predictor[0]]; c->status[1].coeff1 = ff_adpcm_AdaptCoeff1[block_predictor[1]]; c->status[1].coeff2 = ff_adpcm_AdaptCoeff2[block_predictor[1]]; c->status[0].sample1 = bytestream_get_le16(&src); if (st) c->status[1].sample1 = bytestream_get_le16(&src); c->status[0].sample2 = bytestream_get_le16(&src); if (st) c->status[1].sample2 = bytestream_get_le16(&src); *samples++ = c->status[0].sample2; if (st) *samples++ = c->status[1].sample2; *samples++ = c->status[0].sample1; if (st) *samples++ = c->status[1].sample1; for(;n>0;n--) { *samples++ = adpcm_ms_expand_nibble(&c->status[0 ], src[0] >> 4 ); *samples++ = adpcm_ms_expand_nibble(&c->status[st], src[0] & 0x0F); src ++; } break; case CODEC_ID_ADPCM_IMA_DK4: if (avctx->block_align != 0 && buf_size > avctx->block_align) buf_size = avctx->block_align; c->status[0].predictor = (int16_t)bytestream_get_le16(&src); c->status[0].step_index = *src++; src++; *samples++ = c->status[0].predictor; if (st) { c->status[1].predictor = (int16_t)bytestream_get_le16(&src); c->status[1].step_index = *src++; src++; *samples++ = c->status[1].predictor; } while (src < buf + buf_size) { *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[0] >> 4, 3); if (st) *samples++ = adpcm_ima_expand_nibble(&c->status[1], src[0] & 0x0F, 3); else *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[0] & 0x0F, 3); src++; } break; case CODEC_ID_ADPCM_IMA_DK3: if (avctx->block_align != 0 && buf_size > avctx->block_align) buf_size = avctx->block_align; if(buf_size + 16 > (samples_end - samples)*3/8) return -1; c->status[0].predictor = (int16_t)AV_RL16(src + 10); c->status[1].predictor = (int16_t)AV_RL16(src + 12); c->status[0].step_index = src[14]; c->status[1].step_index = src[15]; src += 16; diff_channel = c->status[1].predictor; while (1) { DK3_GET_NEXT_NIBBLE(); adpcm_ima_expand_nibble(&c->status[0], nibble, 3); DK3_GET_NEXT_NIBBLE(); adpcm_ima_expand_nibble(&c->status[1], nibble, 3); diff_channel = (diff_channel + 
c->status[1].predictor) / 2; *samples++ = c->status[0].predictor + c->status[1].predictor; *samples++ = c->status[0].predictor - c->status[1].predictor; DK3_GET_NEXT_NIBBLE(); adpcm_ima_expand_nibble(&c->status[0], nibble, 3); diff_channel = (diff_channel + c->status[1].predictor) / 2; *samples++ = c->status[0].predictor + c->status[1].predictor; *samples++ = c->status[0].predictor - c->status[1].predictor; } break; case CODEC_ID_ADPCM_IMA_ISS: c->status[0].predictor = (int16_t)AV_RL16(src + 0); c->status[0].step_index = src[2]; src += 4; if(st) { c->status[1].predictor = (int16_t)AV_RL16(src + 0); c->status[1].step_index = src[2]; src += 4; } while (src < buf + buf_size) { if (st) { *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[0] >> 4 , 3); *samples++ = adpcm_ima_expand_nibble(&c->status[1], src[0] & 0x0F, 3); } else { *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[0] & 0x0F, 3); *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[0] >> 4 , 3); } src++; } break; case CODEC_ID_ADPCM_IMA_WS: while (src < buf + buf_size) { if (st) { *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[0] >> 4 , 3); *samples++ = adpcm_ima_expand_nibble(&c->status[1], src[0] & 0x0F, 3); } else { *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[0] >> 4 , 3); *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[0] & 0x0F, 3); } src++; } break; case CODEC_ID_ADPCM_XA: while (buf_size >= 128) { xa_decode(samples, src, &c->status[0], &c->status[1], avctx->channels); src += 128; samples += 28 * 8; buf_size -= 128; } break; case CODEC_ID_ADPCM_IMA_EA_EACS: samples_in_chunk = bytestream_get_le32(&src) >> (1-st); if (samples_in_chunk > buf_size-4-(8<<st)) { src += buf_size - 4; break; } for (i=0; i<=st; i++) c->status[i].step_index = bytestream_get_le32(&src); for (i=0; i<=st; i++) c->status[i].predictor = bytestream_get_le32(&src); for (; samples_in_chunk; samples_in_chunk--, src++) { *samples++ = adpcm_ima_expand_nibble(&c->status[0], *src>>4, 3); 
*samples++ = adpcm_ima_expand_nibble(&c->status[st], *src&0x0F, 3); } break; case CODEC_ID_ADPCM_IMA_EA_SEAD: for (; src < buf+buf_size; src++) { *samples++ = adpcm_ima_expand_nibble(&c->status[0], src[0] >> 4, 6); *samples++ = adpcm_ima_expand_nibble(&c->status[st],src[0]&0x0F, 6); } break; case CODEC_ID_ADPCM_EA: if (buf_size < 12) { av_log(avctx, AV_LOG_ERROR, "frame too small\n"); return AVERROR(EINVAL); } samples_in_chunk = AV_RL32(src); if (samples_in_chunk / 28 > (buf_size - 12) / 30) { av_log(avctx, AV_LOG_ERROR, "invalid frame\n"); return AVERROR(EINVAL); } src += 4; current_left_sample = (int16_t)bytestream_get_le16(&src); previous_left_sample = (int16_t)bytestream_get_le16(&src); current_right_sample = (int16_t)bytestream_get_le16(&src); previous_right_sample = (int16_t)bytestream_get_le16(&src); for (count1 = 0; count1 < samples_in_chunk/28;count1++) { coeff1l = ea_adpcm_table[ *src >> 4 ]; coeff2l = ea_adpcm_table[(*src >> 4 ) + 4]; coeff1r = ea_adpcm_table[*src & 0x0F]; coeff2r = ea_adpcm_table[(*src & 0x0F) + 4]; src++; shift_left = (*src >> 4 ) + 8; shift_right = (*src & 0x0F) + 8; src++; for (count2 = 0; count2 < 28; count2++) { next_left_sample = (int32_t)((*src & 0xF0) << 24) >> shift_left; next_right_sample = (int32_t)((*src & 0x0F) << 28) >> shift_right; src++; next_left_sample = (next_left_sample + (current_left_sample * coeff1l) + (previous_left_sample * coeff2l) + 0x80) >> 8; next_right_sample = (next_right_sample + (current_right_sample * coeff1r) + (previous_right_sample * coeff2r) + 0x80) >> 8; previous_left_sample = current_left_sample; current_left_sample = av_clip_int16(next_left_sample); previous_right_sample = current_right_sample; current_right_sample = av_clip_int16(next_right_sample); *samples++ = (unsigned short)current_left_sample; *samples++ = (unsigned short)current_right_sample; } } if (src - buf == buf_size - 2) src += 2; /* Skip terminating 0x0000 */ break; case CODEC_ID_ADPCM_EA_MAXIS_XA: for(channel = 0; channel < 
avctx->channels; channel++) { for (i=0; i<2; i++) coeff[channel][i] = ea_adpcm_table[(*src >> 4) + 4*i]; shift[channel] = (*src & 0x0F) + 8; src++; } for (count1 = 0; count1 < (buf_size - avctx->channels) / avctx->channels; count1++) { for(i = 4; i >= 0; i-=4) { for(channel = 0; channel < avctx->channels; channel++) { int32_t sample = (int32_t)(((*(src+channel) >> i) & 0x0F) << 0x1C) >> shift[channel]; sample = (sample + c->status[channel].sample1 * coeff[channel][0] + c->status[channel].sample2 * coeff[channel][1] + 0x80) >> 8; c->status[channel].sample2 = c->status[channel].sample1; c->status[channel].sample1 = av_clip_int16(sample); *samples++ = c->status[channel].sample1; } } src+=avctx->channels; } break; case CODEC_ID_ADPCM_EA_R1: case CODEC_ID_ADPCM_EA_R2: case CODEC_ID_ADPCM_EA_R3: { const int big_endian = avctx->codec->id == CODEC_ID_ADPCM_EA_R3; int32_t previous_sample, current_sample, next_sample; int32_t coeff1, coeff2; uint8_t shift; unsigned int channel; uint16_t *samplesC; const uint8_t *srcC; const uint8_t *src_end = buf + buf_size; samples_in_chunk = (big_endian ? bytestream_get_be32(&src) : bytestream_get_le32(&src)) / 28; if (samples_in_chunk > UINT32_MAX/(28*avctx->channels) || 28*samples_in_chunk*avctx->channels > samples_end-samples) { src += buf_size - 4; break; } for (channel=0; channel<avctx->channels; channel++) { int32_t offset = (big_endian ? 
bytestream_get_be32(&src) : bytestream_get_le32(&src)) + (avctx->channels-channel-1) * 4; if ((offset < 0) || (offset >= src_end - src - 4)) break; srcC = src + offset; samplesC = samples + channel; if (avctx->codec->id == CODEC_ID_ADPCM_EA_R1) { current_sample = (int16_t)bytestream_get_le16(&srcC); previous_sample = (int16_t)bytestream_get_le16(&srcC); } else { current_sample = c->status[channel].predictor; previous_sample = c->status[channel].prev_sample; } for (count1=0; count1<samples_in_chunk; count1++) { if (*srcC == 0xEE) { srcC++; if (srcC > src_end - 30*2) break; current_sample = (int16_t)bytestream_get_be16(&srcC); previous_sample = (int16_t)bytestream_get_be16(&srcC); for (count2=0; count2<28; count2++) { *samplesC = (int16_t)bytestream_get_be16(&srcC); samplesC += avctx->channels; } } else { coeff1 = ea_adpcm_table[ *srcC>>4 ]; coeff2 = ea_adpcm_table[(*srcC>>4) + 4]; shift = (*srcC++ & 0x0F) + 8; if (srcC > src_end - 14) break; for (count2=0; count2<28; count2++) { if (count2 & 1) next_sample = (int32_t)((*srcC++ & 0x0F) << 28) >> shift; else next_sample = (int32_t)((*srcC & 0xF0) << 24) >> shift; next_sample += (current_sample * coeff1) + (previous_sample * coeff2); next_sample = av_clip_int16(next_sample >> 8); previous_sample = current_sample; current_sample = next_sample; *samplesC = current_sample; samplesC += avctx->channels; } } } if (avctx->codec->id != CODEC_ID_ADPCM_EA_R1) { c->status[channel].predictor = current_sample; c->status[channel].prev_sample = previous_sample; } } src = src + buf_size - (4 + 4*avctx->channels); samples += 28 * samples_in_chunk * avctx->channels; break; } case CODEC_ID_ADPCM_EA_XAS: if (samples_end-samples < 32*4*avctx->channels || buf_size < (4+15)*4*avctx->channels) { src += buf_size; break; } for (channel=0; channel<avctx->channels; channel++) { int coeff[2][4], shift[4]; short *s2, *s = &samples[channel]; for (n=0; n<4; n++, s+=32*avctx->channels) { for (i=0; i<2; i++) coeff[i][n] = 
ea_adpcm_table[(src[0]&0x0F)+4*i]; shift[n] = (src[2]&0x0F) + 8; for (s2=s, i=0; i<2; i++, src+=2, s2+=avctx->channels) s2[0] = (src[0]&0xF0) + (src[1]<<8); } for (m=2; m<32; m+=2) { s = &samples[m*avctx->channels + channel]; for (n=0; n<4; n++, src++, s+=32*avctx->channels) { for (s2=s, i=0; i<8; i+=4, s2+=avctx->channels) { int level = (int32_t)((*src & (0xF0>>i)) << (24+i)) >> shift[n]; int pred = s2[-1*avctx->channels] * coeff[0][n] + s2[-2*avctx->channels] * coeff[1][n]; s2[0] = av_clip_int16((level + pred + 0x80) >> 8); } } } } samples += 32*4*avctx->channels; break; case CODEC_ID_ADPCM_IMA_AMV: case CODEC_ID_ADPCM_IMA_SMJPEG: c->status[0].predictor = (int16_t)bytestream_get_le16(&src); c->status[0].step_index = bytestream_get_le16(&src); if (avctx->codec->id == CODEC_ID_ADPCM_IMA_AMV) src+=4; while (src < buf + buf_size) { char hi, lo; lo = *src & 0x0F; hi = *src >> 4; if (avctx->codec->id == CODEC_ID_ADPCM_IMA_AMV) FFSWAP(char, hi, lo); *samples++ = adpcm_ima_expand_nibble(&c->status[0], lo, 3); *samples++ = adpcm_ima_expand_nibble(&c->status[0], hi, 3); src++; } break; case CODEC_ID_ADPCM_CT: while (src < buf + buf_size) { if (st) { *samples++ = adpcm_ct_expand_nibble(&c->status[0], src[0] >> 4); *samples++ = adpcm_ct_expand_nibble(&c->status[1], src[0] & 0x0F); } else { *samples++ = adpcm_ct_expand_nibble(&c->status[0], src[0] >> 4); *samples++ = adpcm_ct_expand_nibble(&c->status[0], src[0] & 0x0F); } src++; } break; case CODEC_ID_ADPCM_SBPRO_4: case CODEC_ID_ADPCM_SBPRO_3: case CODEC_ID_ADPCM_SBPRO_2: if (!c->status[0].step_index) { *samples++ = 128 * (*src++ - 0x80); if (st) *samples++ = 128 * (*src++ - 0x80); c->status[0].step_index = 1; } if (avctx->codec->id == CODEC_ID_ADPCM_SBPRO_4) { while (src < buf + buf_size) { *samples++ = adpcm_sbpro_expand_nibble(&c->status[0], src[0] >> 4, 4, 0); *samples++ = adpcm_sbpro_expand_nibble(&c->status[st], src[0] & 0x0F, 4, 0); src++; } } else if (avctx->codec->id == CODEC_ID_ADPCM_SBPRO_3) { while (src < buf + 
buf_size && samples + 2 < samples_end) { *samples++ = adpcm_sbpro_expand_nibble(&c->status[0], src[0] >> 5 , 3, 0); *samples++ = adpcm_sbpro_expand_nibble(&c->status[0], (src[0] >> 2) & 0x07, 3, 0); *samples++ = adpcm_sbpro_expand_nibble(&c->status[0], src[0] & 0x03, 2, 0); src++; } } else { while (src < buf + buf_size && samples + 3 < samples_end) { *samples++ = adpcm_sbpro_expand_nibble(&c->status[0], src[0] >> 6 , 2, 2); *samples++ = adpcm_sbpro_expand_nibble(&c->status[st], (src[0] >> 4) & 0x03, 2, 2); *samples++ = adpcm_sbpro_expand_nibble(&c->status[0], (src[0] >> 2) & 0x03, 2, 2); *samples++ = adpcm_sbpro_expand_nibble(&c->status[st], src[0] & 0x03, 2, 2); src++; } } break; case CODEC_ID_ADPCM_SWF: { GetBitContext gb; const int *table; int k0, signmask, nb_bits, count; int size = buf_size*8; init_get_bits(&gb, buf, size); /* read bits & initial values */ nb_bits = get_bits(&gb, 2)+2; av_log(NULL,AV_LOG_INFO,"nb_bits: %d\n", nb_bits); table = swf_index_tables[nb_bits-2]; k0 = 1 << (nb_bits-2); signmask = 1 << (nb_bits-1); while (get_bits_count(&gb) <= size - 22*avctx->channels) { for (i = 0; i < avctx->channels; i++) { *samples++ = c->status[i].predictor = get_sbits(&gb, 16); c->status[i].step_index = get_bits(&gb, 6); } for (count = 0; get_bits_count(&gb) <= size - nb_bits*avctx->channels && count < 4095; count++) { int i; for (i = 0; i < avctx->channels; i++) { /* similar to IMA adpcm */ int delta = get_bits(&gb, nb_bits); int step = ff_adpcm_step_table[c->status[i].step_index]; long vpdiff = 0; /* vpdiff = (delta+0.5)*step/4 */ int k = k0; do { if (delta & k) vpdiff += step; step >>= 1; k >>= 1; } while(k); vpdiff += step; if (delta & signmask) c->status[i].predictor -= vpdiff; else c->status[i].predictor += vpdiff; c->status[i].step_index += table[delta & (~signmask)]; c->status[i].step_index = av_clip(c->status[i].step_index, 0, 88); c->status[i].predictor = av_clip_int16(c->status[i].predictor); *samples++ = c->status[i].predictor; if (samples >= samples_end) { 
av_log(avctx, AV_LOG_ERROR, "allocated output buffer is too small\n"); return -1; } } } } src += buf_size; break; } case CODEC_ID_ADPCM_YAMAHA: while (src < buf + buf_size) { if (st) { *samples++ = adpcm_yamaha_expand_nibble(&c->status[0], src[0] & 0x0F); *samples++ = adpcm_yamaha_expand_nibble(&c->status[1], src[0] >> 4 ); } else { *samples++ = adpcm_yamaha_expand_nibble(&c->status[0], src[0] & 0x0F); *samples++ = adpcm_yamaha_expand_nibble(&c->status[0], src[0] >> 4 ); } src++; } break; case CODEC_ID_ADPCM_THP: { int table[2][16]; unsigned int samplecnt; int prev[2][2]; int ch; if (buf_size < 80) { av_log(avctx, AV_LOG_ERROR, "frame too small\n"); return -1; } src+=4; samplecnt = bytestream_get_be32(&src); for (i = 0; i < 32; i++) table[0][i] = (int16_t)bytestream_get_be16(&src); for (i = 0; i < 4; i++) prev[0][i] = (int16_t)bytestream_get_be16(&src); if (samplecnt >= (samples_end - samples) / (st + 1)) { av_log(avctx, AV_LOG_ERROR, "allocated output buffer is too small\n"); return -1; } for (ch = 0; ch <= st; ch++) { samples = (unsigned short *) data + ch; for (i = 0; i < samplecnt / 14; i++) { int index = (*src >> 4) & 7; unsigned int exp = 28 - (*src++ & 15); int factor1 = table[ch][index * 2]; int factor2 = table[ch][index * 2 + 1]; for (n = 0; n < 14; n++) { int32_t sampledat; if(n&1) sampledat= *src++ <<28; else sampledat= (*src&0xF0)<<24; sampledat = ((prev[ch][0]*factor1 + prev[ch][1]*factor2) >> 11) + (sampledat>>exp); *samples = av_clip_int16(sampledat); prev[ch][1] = prev[ch][0]; prev[ch][0] = *samples++; samples += st; } } } samples -= st; break; } default: return -1; } *data_size = (uint8_t *)samples - (uint8_t *)data; return src - buf; }
1threat
static inline uint8_t mipsdsp_lshift8(uint8_t a, uint8_t s, CPUMIPSState *env) { uint8_t sign; uint8_t discard; if (s == 0) { return a; } else { sign = (a >> 7) & 0x01; if (sign != 0) { discard = (((0x01 << (8 - s)) - 1) << s) | ((a >> (6 - (s - 1))) & ((0x01 << s) - 1)); } else { discard = a >> (6 - (s - 1)); } if (discard != 0x00) { set_DSPControl_overflow_flag(1, 22, env); } return a << s; } }
1threat
int av_read_frame(AVFormatContext *s, AVPacket *pkt) { const int genpts = s->flags & AVFMT_FLAG_GENPTS; int eof = 0; int ret; AVStream *st; if (!genpts) { ret = s->packet_buffer ? read_from_packet_buffer(&s->packet_buffer, &s->packet_buffer_end, pkt) : read_frame_internal(s, pkt); if (ret < 0) return ret; goto return_packet; } for (;;) { AVPacketList *pktl = s->packet_buffer; if (pktl) { AVPacket *next_pkt = &pktl->pkt; if (next_pkt->dts != AV_NOPTS_VALUE) { int wrap_bits = s->streams[next_pkt->stream_index]->pts_wrap_bits; int64_t last_dts = next_pkt->dts; while (pktl && next_pkt->pts == AV_NOPTS_VALUE) { if (pktl->pkt.stream_index == next_pkt->stream_index && (av_compare_mod(next_pkt->dts, pktl->pkt.dts, 2LL << (wrap_bits - 1)) < 0)) { if (av_compare_mod(pktl->pkt.pts, pktl->pkt.dts, 2LL << (wrap_bits - 1))) { next_pkt->pts = pktl->pkt.dts; } if (last_dts != AV_NOPTS_VALUE) { last_dts = pktl->pkt.dts; } } pktl = pktl->next; } if (eof && next_pkt->pts == AV_NOPTS_VALUE && last_dts != AV_NOPTS_VALUE) { next_pkt->pts = last_dts + next_pkt->duration; } pktl = s->packet_buffer; } if (!(next_pkt->pts == AV_NOPTS_VALUE && next_pkt->dts != AV_NOPTS_VALUE && !eof)) { ret = read_from_packet_buffer(&s->packet_buffer, &s->packet_buffer_end, pkt); goto return_packet; } } ret = read_frame_internal(s, pkt); if (ret < 0) { if (pktl && ret != AVERROR(EAGAIN)) { eof = 1; continue; } else return ret; } if (av_dup_packet(add_to_pktbuf(&s->packet_buffer, pkt, &s->packet_buffer_end)) < 0) return AVERROR(ENOMEM); } return_packet: st = s->streams[pkt->stream_index]; if (st->skip_samples) { uint8_t *p = av_packet_new_side_data(pkt, AV_PKT_DATA_SKIP_SAMPLES, 10); AV_WL32(p, st->skip_samples); av_log(s, AV_LOG_DEBUG, "demuxer injecting skip %d\n", st->skip_samples); st->skip_samples = 0; } if ((s->iformat->flags & AVFMT_GENERIC_INDEX) && pkt->flags & AV_PKT_FLAG_KEY) { ff_reduce_index(s, st->index); av_add_index_entry(st, pkt->pos, pkt->dts, 0, 0, AVINDEX_KEYFRAME); } if 
(is_relative(pkt->dts)) pkt->dts -= RELATIVE_TS_BASE; if (is_relative(pkt->pts)) pkt->pts -= RELATIVE_TS_BASE; return ret; }
1threat
static void uart_parameters_setup(UartState *s) { QEMUSerialSetParams ssp; unsigned int baud_rate, packet_size; baud_rate = (s->r[R_MR] & UART_MR_CLKS) ? UART_INPUT_CLK / 8 : UART_INPUT_CLK; ssp.speed = baud_rate / (s->r[R_BRGR] * (s->r[R_BDIV] + 1)); packet_size = 1; switch (s->r[R_MR] & UART_MR_PAR) { case UART_PARITY_EVEN: ssp.parity = 'E'; packet_size++; break; case UART_PARITY_ODD: ssp.parity = 'O'; packet_size++; break; default: ssp.parity = 'N'; break; } switch (s->r[R_MR] & UART_MR_CHRL) { case UART_DATA_BITS_6: ssp.data_bits = 6; break; case UART_DATA_BITS_7: ssp.data_bits = 7; break; default: ssp.data_bits = 8; break; } switch (s->r[R_MR] & UART_MR_NBSTOP) { case UART_STOP_BITS_1: ssp.stop_bits = 1; break; default: ssp.stop_bits = 2; break; } packet_size += ssp.data_bits + ssp.stop_bits; s->char_tx_time = (get_ticks_per_sec() / ssp.speed) * packet_size; qemu_chr_fe_ioctl(s->chr, CHR_IOCTL_SERIAL_SET_PARAMS, &ssp); }
1threat
count docstrings as line of code in Python : I've noticed that line counting starts from where the docstring ends. But this causes a problem in error tracing, because the traceback points to a different line than the one where the error-invoking line is actually present. Here is a simple example to demonstrate that:

#comments
#comments
#comments
#comments
#comments
#comments
def divide(a,b):
    a = int(a)
    #convert a to an integer
    b = int(b)
    #convert b to an integer
    res = a/b
    #calculate result
    return res

divide(2,0)

>Error

ZeroDivisionError Traceback (most recent call last)
<ipython-input-56-030e2eec799d> in <module>()
----> 1 divide(2,0)

<ipython-input-55-9cd1ccec09c4> in divide(a, b)
      4     b = int(b)
      5     #convert b to an integer
----> 6     res = a/b
      7     #calculate result
      8     return res

ZeroDivisionError: division by zero

The error points to line no. 6 whereas the actual position is `12`. Is there any solution to this?
0debug
add context to ArrayAdapter in fragment class : I have a problem in Android Studio: I want to add a context to an ArrayAdapter in a fragment class, but it is not accepted. Here is an image: [enter image description here][1] [1]: https://i.stack.imgur.com/j4LhH.png I also tried `getContext`, but it does not work.
0debug
static int flac_parse(AVCodecParserContext *s, AVCodecContext *avctx, const uint8_t **poutbuf, int *poutbuf_size, const uint8_t *buf, int buf_size) { FLACParseContext *fpc = s->priv_data; FLACHeaderMarker *curr; int nb_headers; int read_size = 0; if (s->flags & PARSER_FLAG_COMPLETE_FRAMES) { FLACFrameInfo fi; if (frame_header_is_valid(avctx, buf, &fi)) avctx->frame_size = fi.blocksize; *poutbuf = buf; *poutbuf_size = buf_size; return buf_size; } fpc->avctx = avctx; if (fpc->best_header_valid) return get_best_header(fpc, poutbuf, poutbuf_size); if (fpc->best_header && fpc->best_header->best_child) { FLACHeaderMarker *temp; FLACHeaderMarker *best_child = fpc->best_header->best_child; for (curr = fpc->headers; curr != best_child; curr = temp) { if (curr != fpc->best_header) { av_log(avctx, AV_LOG_DEBUG, "dropping low score %i frame header from offset %i to %i\n", curr->max_score, curr->offset, curr->next->offset); } temp = curr->next; av_freep(&curr->link_penalty); av_free(curr); fpc->nb_headers_buffered--; } av_fifo_drain(fpc->fifo_buf, best_child->offset); for (curr = best_child->next; curr; curr = curr->next) curr->offset -= best_child->offset; fpc->nb_headers_buffered--; best_child->offset = 0; fpc->headers = best_child; if (fpc->nb_headers_buffered >= FLAC_MIN_HEADERS) { fpc->best_header = best_child; return get_best_header(fpc, poutbuf, poutbuf_size); } fpc->best_header = NULL; } else if (fpc->best_header) { FLACHeaderMarker *temp; for (curr = fpc->headers; curr != fpc->best_header; curr = temp) { temp = curr->next; av_freep(&curr->link_penalty); av_free(curr); } fpc->headers = fpc->best_header->next; av_freep(&fpc->best_header->link_penalty); av_freep(&fpc->best_header); } if (buf_size || !fpc->end_padded) { int start_offset; if (!buf_size) { fpc->end_padded = 1; buf_size = read_size = MAX_FRAME_HEADER_SIZE; } else { int nb_desired = FLAC_MIN_HEADERS - fpc->nb_headers_buffered + 1; read_size = FFMIN(buf_size, nb_desired * FLAC_AVG_FRAME_SIZE); } if 
(av_fifo_realloc2(fpc->fifo_buf, read_size + av_fifo_size(fpc->fifo_buf)) < 0) { av_log(avctx, AV_LOG_ERROR, "couldn't reallocate buffer of size %d\n", read_size + av_fifo_size(fpc->fifo_buf)); goto handle_error; } if (buf) { av_fifo_generic_write(fpc->fifo_buf, (void*) buf, read_size, NULL); } else { int8_t pad[MAX_FRAME_HEADER_SIZE]; memset(pad, 0, sizeof(pad)); av_fifo_generic_write(fpc->fifo_buf, (void*) pad, sizeof(pad), NULL); } start_offset = av_fifo_size(fpc->fifo_buf) - (read_size + (MAX_FRAME_HEADER_SIZE - 1)); start_offset = FFMAX(0, start_offset); nb_headers = find_new_headers(fpc, start_offset); if (nb_headers < 0) { av_log(avctx, AV_LOG_ERROR, "find_new_headers couldn't allocate FLAC header\n"); goto handle_error; } fpc->nb_headers_buffered = nb_headers; if (!fpc->end_padded && fpc->nb_headers_buffered < FLAC_MIN_HEADERS) goto handle_error; if (fpc->end_padded || fpc->nb_headers_found) score_sequences(fpc); if (fpc->end_padded) { fpc->fifo_buf->wptr -= MAX_FRAME_HEADER_SIZE; fpc->fifo_buf->wndx -= MAX_FRAME_HEADER_SIZE; if (fpc->fifo_buf->wptr < 0) { fpc->fifo_buf->wptr += fpc->fifo_buf->end - fpc->fifo_buf->buffer; } buf_size = read_size = 0; } } curr = fpc->headers; for (curr = fpc->headers; curr; curr = curr->next) if (!fpc->best_header || curr->max_score > fpc->best_header->max_score) fpc->best_header = curr; if (fpc->best_header) { fpc->best_header_valid = 1; if (fpc->best_header->offset > 0) { av_log(avctx, AV_LOG_DEBUG, "Junk frame till offset %i\n", fpc->best_header->offset); avctx->frame_size = 0; *poutbuf_size = fpc->best_header->offset; *poutbuf = flac_fifo_read_wrap(fpc, 0, *poutbuf_size, &fpc->wrap_buf, &fpc->wrap_buf_allocated_size); return buf_size ? read_size : (fpc->best_header->offset - av_fifo_size(fpc->fifo_buf)); } if (!buf_size) return get_best_header(fpc, poutbuf, poutbuf_size); } handle_error: *poutbuf = NULL; *poutbuf_size = 0; return read_size; }
1threat
exception specification of overriding function is more lax than base version : <p>I want to write a custom exception class; here's the code:</p> <pre><code>class TestException : std::exception{ public: const char *what() const override { return "TestException"; } }; </code></pre> <p>I used CLion and the IDE gives me a warning on the function <code>what()</code>: <code>exception specification of overriding function is more lax than base version</code></p> <p>But if I build the code with gcc, no warning comes out. I used C++14 with gcc 6.5.0.</p> <p>Can anybody help to explain what the warning means, and can I just ignore it?</p>
0debug
Spring Boot: @Value always returns null : <p>I would like to use a value from the <code>application.properties</code> file in order to pass it to a method in another class. The problem is that the value is always <code>NULL</code>. What could be the problem? Thanks in advance.</p> <p><strong><code>application.properties</code></strong></p> <pre><code>filesystem.directory=temp </code></pre> <p><strong><code>FileSystem.java</code></strong></p> <pre><code>@Value("${filesystem.directory}") private static String directory; </code></pre>
0debug
Which HBase connector for Spark 2.0 should I use? : <p>Our stack is composed of Google Data Proc (Spark 2.0) and Google BigTable (HBase 1.2.0) and I am looking for a connector working with these versions.</p> <p>The Spark 2.0 and the new DataSet API support is not clear to me for the connectors I have found:</p> <ul> <li><strong>spark-hbase</strong> : <a href="https://github.com/apache/hbase/tree/master/hbase-spark" rel="noreferrer">https://github.com/apache/hbase/tree/master/hbase-spark</a></li> <li><strong>spark-hbase-connector</strong> : <a href="https://github.com/nerdammer/spark-hbase-connector" rel="noreferrer">https://github.com/nerdammer/spark-hbase-connector</a></li> <li><strong>hortonworks-spark/shc</strong> : <a href="https://github.com/hortonworks-spark/shc" rel="noreferrer">https://github.com/hortonworks-spark/shc</a></li> </ul> <p>The project is written in Scala 2.11 with SBT.</p> <p>Thanks for your help</p>
0debug
int paio_init(void) { struct sigaction act; PosixAioState *s; int fds[2]; int ret; if (posix_aio_state) return 0; s = qemu_malloc(sizeof(PosixAioState)); sigfillset(&act.sa_mask); act.sa_flags = 0; act.sa_handler = aio_signal_handler; sigaction(SIGUSR2, &act, NULL); s->first_aio = NULL; if (pipe(fds) == -1) { fprintf(stderr, "failed to create pipe\n"); return -1; } s->rfd = fds[0]; s->wfd = fds[1]; fcntl(s->rfd, F_SETFL, O_NONBLOCK); fcntl(s->wfd, F_SETFL, O_NONBLOCK); qemu_aio_set_fd_handler(s->rfd, posix_aio_read, NULL, posix_aio_flush, posix_aio_process_queue, s); ret = pthread_attr_init(&attr); if (ret) die2(ret, "pthread_attr_init"); ret = pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED); if (ret) die2(ret, "pthread_attr_setdetachstate"); QTAILQ_INIT(&request_list); posix_aio_state = s; return 0; }
1threat
int ff_hevc_parse_sps(HEVCSPS *sps, GetBitContext *gb, unsigned int *sps_id, int apply_defdispwin, AVBufferRef **vps_list, AVCodecContext *avctx) { HEVCWindow *ow; int ret = 0; int log2_diff_max_min_transform_block_size; int bit_depth_chroma, start, vui_present, sublayer_ordering_info; int i; sps->vps_id = get_bits(gb, 4); if (sps->vps_id >= HEVC_MAX_VPS_COUNT) { av_log(avctx, AV_LOG_ERROR, "VPS id out of range: %d\n", sps->vps_id); if (vps_list && !vps_list[sps->vps_id]) { av_log(avctx, AV_LOG_ERROR, "VPS %d does not exist\n", sps->vps_id); sps->max_sub_layers = get_bits(gb, 3) + 1; if (sps->max_sub_layers > HEVC_MAX_SUB_LAYERS) { av_log(avctx, AV_LOG_ERROR, "sps_max_sub_layers out of range: %d\n", sps->max_sub_layers); skip_bits1(gb); parse_ptl(gb, avctx, &sps->ptl, sps->max_sub_layers); *sps_id = get_ue_golomb_long(gb); if (*sps_id >= HEVC_MAX_SPS_COUNT) { av_log(avctx, AV_LOG_ERROR, "SPS id out of range: %d\n", *sps_id); sps->chroma_format_idc = get_ue_golomb_long(gb); if (sps->chroma_format_idc != 1) { avpriv_report_missing_feature(avctx, "chroma_format_idc %d", sps->chroma_format_idc); ret = AVERROR_PATCHWELCOME; if (sps->chroma_format_idc == 3) sps->separate_colour_plane_flag = get_bits1(gb); sps->width = get_ue_golomb_long(gb); sps->height = get_ue_golomb_long(gb); if ((ret = av_image_check_size(sps->width, sps->height, 0, avctx)) < 0) if (get_bits1(gb)) { sps->pic_conf_win.left_offset = get_ue_golomb_long(gb) * 2; sps->pic_conf_win.right_offset = get_ue_golomb_long(gb) * 2; sps->pic_conf_win.top_offset = get_ue_golomb_long(gb) * 2; sps->pic_conf_win.bottom_offset = get_ue_golomb_long(gb) * 2; if (avctx->flags2 & AV_CODEC_FLAG2_IGNORE_CROP) { av_log(avctx, AV_LOG_DEBUG, "discarding sps conformance window, " "original values are l:%u r:%u t:%u b:%u\n", sps->pic_conf_win.left_offset, sps->pic_conf_win.right_offset, sps->pic_conf_win.top_offset, sps->pic_conf_win.bottom_offset); sps->pic_conf_win.left_offset = sps->pic_conf_win.right_offset = 
sps->pic_conf_win.top_offset = sps->pic_conf_win.bottom_offset = 0; sps->output_window = sps->pic_conf_win; sps->bit_depth = get_ue_golomb_long(gb) + 8; bit_depth_chroma = get_ue_golomb_long(gb) + 8; if (bit_depth_chroma != sps->bit_depth) { av_log(avctx, AV_LOG_ERROR, "Luma bit depth (%d) is different from chroma bit depth (%d), " "this is unsupported.\n", sps->bit_depth, bit_depth_chroma); ret = map_pixel_format(avctx, sps); if (ret < 0) sps->log2_max_poc_lsb = get_ue_golomb_long(gb) + 4; if (sps->log2_max_poc_lsb > 16) { av_log(avctx, AV_LOG_ERROR, "log2_max_pic_order_cnt_lsb_minus4 out range: %d\n", sps->log2_max_poc_lsb - 4); sublayer_ordering_info = get_bits1(gb); start = sublayer_ordering_info ? 0 : sps->max_sub_layers - 1; for (i = start; i < sps->max_sub_layers; i++) { sps->temporal_layer[i].max_dec_pic_buffering = get_ue_golomb_long(gb) + 1; sps->temporal_layer[i].num_reorder_pics = get_ue_golomb_long(gb); sps->temporal_layer[i].max_latency_increase = get_ue_golomb_long(gb) - 1; if (sps->temporal_layer[i].max_dec_pic_buffering > HEVC_MAX_DPB_SIZE) { av_log(avctx, AV_LOG_ERROR, "sps_max_dec_pic_buffering_minus1 out of range: %d\n", sps->temporal_layer[i].max_dec_pic_buffering - 1); if (sps->temporal_layer[i].num_reorder_pics > sps->temporal_layer[i].max_dec_pic_buffering - 1) { av_log(avctx, AV_LOG_WARNING, "sps_max_num_reorder_pics out of range: %d\n", sps->temporal_layer[i].num_reorder_pics); if (avctx->err_recognition & AV_EF_EXPLODE || sps->temporal_layer[i].num_reorder_pics > HEVC_MAX_DPB_SIZE - 1) { sps->temporal_layer[i].max_dec_pic_buffering = sps->temporal_layer[i].num_reorder_pics + 1; if (!sublayer_ordering_info) { for (i = 0; i < start; i++) { sps->temporal_layer[i].max_dec_pic_buffering = sps->temporal_layer[start].max_dec_pic_buffering; sps->temporal_layer[i].num_reorder_pics = sps->temporal_layer[start].num_reorder_pics; sps->temporal_layer[i].max_latency_increase = sps->temporal_layer[start].max_latency_increase; sps->log2_min_cb_size = 
get_ue_golomb_long(gb) + 3; sps->log2_diff_max_min_coding_block_size = get_ue_golomb_long(gb); sps->log2_min_tb_size = get_ue_golomb_long(gb) + 2; log2_diff_max_min_transform_block_size = get_ue_golomb_long(gb); sps->log2_max_trafo_size = log2_diff_max_min_transform_block_size + sps->log2_min_tb_size; if (sps->log2_min_tb_size >= sps->log2_min_cb_size) { av_log(avctx, AV_LOG_ERROR, "Invalid value for log2_min_tb_size"); sps->max_transform_hierarchy_depth_inter = get_ue_golomb_long(gb); sps->max_transform_hierarchy_depth_intra = get_ue_golomb_long(gb); sps->scaling_list_enable_flag = get_bits1(gb); if (sps->scaling_list_enable_flag) { set_default_scaling_list_data(&sps->scaling_list); if (get_bits1(gb)) { ret = scaling_list_data(gb, avctx, &sps->scaling_list); if (ret < 0) sps->amp_enabled_flag = get_bits1(gb); sps->sao_enabled = get_bits1(gb); sps->pcm_enabled_flag = get_bits1(gb); if (sps->pcm_enabled_flag) { sps->pcm.bit_depth = get_bits(gb, 4) + 1; sps->pcm.bit_depth_chroma = get_bits(gb, 4) + 1; sps->pcm.log2_min_pcm_cb_size = get_ue_golomb_long(gb) + 3; sps->pcm.log2_max_pcm_cb_size = sps->pcm.log2_min_pcm_cb_size + get_ue_golomb_long(gb); if (sps->pcm.bit_depth > sps->bit_depth) { av_log(avctx, AV_LOG_ERROR, "PCM bit depth (%d) is greater than normal bit depth (%d)\n", sps->pcm.bit_depth, sps->bit_depth); sps->pcm.loop_filter_disable_flag = get_bits1(gb); sps->nb_st_rps = get_ue_golomb_long(gb); if (sps->nb_st_rps > HEVC_MAX_SHORT_TERM_REF_PIC_SETS) { av_log(avctx, AV_LOG_ERROR, "Too many short term RPS: %d.\n", sps->nb_st_rps); for (i = 0; i < sps->nb_st_rps; i++) { if ((ret = ff_hevc_decode_short_term_rps(gb, avctx, &sps->st_rps[i], sps, 0)) < 0) sps->long_term_ref_pics_present_flag = get_bits1(gb); if (sps->long_term_ref_pics_present_flag) { sps->num_long_term_ref_pics_sps = get_ue_golomb_long(gb); for (i = 0; i < sps->num_long_term_ref_pics_sps; i++) { sps->lt_ref_pic_poc_lsb_sps[i] = get_bits(gb, sps->log2_max_poc_lsb); 
sps->used_by_curr_pic_lt_sps_flag[i] = get_bits1(gb); sps->sps_temporal_mvp_enabled_flag = get_bits1(gb); sps->sps_strong_intra_smoothing_enable_flag = get_bits1(gb); sps->vui.sar = (AVRational){0, 1}; vui_present = get_bits1(gb); if (vui_present) decode_vui(gb, avctx, apply_defdispwin, sps); skip_bits1(gb); if (apply_defdispwin) { sps->output_window.left_offset += sps->vui.def_disp_win.left_offset; sps->output_window.right_offset += sps->vui.def_disp_win.right_offset; sps->output_window.top_offset += sps->vui.def_disp_win.top_offset; sps->output_window.bottom_offset += sps->vui.def_disp_win.bottom_offset; ow = &sps->output_window; if (ow->left_offset >= INT_MAX - ow->right_offset || ow->top_offset >= INT_MAX - ow->bottom_offset || ow->left_offset + ow->right_offset >= sps->width || ow->top_offset + ow->bottom_offset >= sps->height) { av_log(avctx, AV_LOG_WARNING, "Invalid cropping offsets: %u/%u/%u/%u\n", ow->left_offset, ow->right_offset, ow->top_offset, ow->bottom_offset); if (avctx->err_recognition & AV_EF_EXPLODE) { av_log(avctx, AV_LOG_WARNING, "Displaying the whole video surface.\n"); memset(ow, 0, sizeof(*ow)); sps->log2_ctb_size = sps->log2_min_cb_size + sps->log2_diff_max_min_coding_block_size; sps->log2_min_pu_size = sps->log2_min_cb_size - 1; sps->ctb_width = (sps->width + (1 << sps->log2_ctb_size) - 1) >> sps->log2_ctb_size; sps->ctb_height = (sps->height + (1 << sps->log2_ctb_size) - 1) >> sps->log2_ctb_size; sps->ctb_size = sps->ctb_width * sps->ctb_height; sps->min_cb_width = sps->width >> sps->log2_min_cb_size; sps->min_cb_height = sps->height >> sps->log2_min_cb_size; sps->min_tb_width = sps->width >> sps->log2_min_tb_size; sps->min_tb_height = sps->height >> sps->log2_min_tb_size; sps->min_pu_width = sps->width >> sps->log2_min_pu_size; sps->min_pu_height = sps->height >> sps->log2_min_pu_size; sps->qp_bd_offset = 6 * (sps->bit_depth - 8); if (sps->width & ((1 << sps->log2_min_cb_size) - 1) || sps->height & ((1 << sps->log2_min_cb_size) - 1)) { 
av_log(avctx, AV_LOG_ERROR, "Invalid coded frame dimensions.\n"); if (sps->log2_ctb_size > HEVC_MAX_LOG2_CTB_SIZE) { av_log(avctx, AV_LOG_ERROR, "CTB size out of range: 2^%d\n", sps->log2_ctb_size); if (sps->max_transform_hierarchy_depth_inter > sps->log2_ctb_size - sps->log2_min_tb_size) { av_log(avctx, AV_LOG_ERROR, "max_transform_hierarchy_depth_inter out of range: %d\n", sps->max_transform_hierarchy_depth_inter); if (sps->max_transform_hierarchy_depth_intra > sps->log2_ctb_size - sps->log2_min_tb_size) { av_log(avctx, AV_LOG_ERROR, "max_transform_hierarchy_depth_intra out of range: %d\n", sps->max_transform_hierarchy_depth_intra); if (sps->log2_max_trafo_size > FFMIN(sps->log2_ctb_size, 5)) { av_log(avctx, AV_LOG_ERROR, "max transform block size out of range: %d\n", sps->log2_max_trafo_size); return 0; err: return ret < 0 ? ret : AVERROR_INVALIDDATA;
1threat
How do I get the side of a div to rise up : How do I get the side of a div to rise up? The div looks like this before I move the pointer over it: [enter image description here][1] And when I move the pointer over the div, it looks like this: [enter image description here][2] How should I write the HTML and CSS code? [1]: https://i.stack.imgur.com/UafDq.png [2]: https://i.stack.imgur.com/kG3bc.png
0debug
What is " [1] ", in sock.getsockname()[1]? : <p>I was going through socket programming in Python and I saw this:</p> <p><code>sock.getsockname()[1]</code>, can anyone please explain what that "[1]" is for?</p>
0debug
static void menelaus_rtc_hz(void *opaque) { struct menelaus_s *s = (struct menelaus_s *) opaque; s->rtc.next_comp --; s->rtc.alm_sec --; s->rtc.next += 1000; qemu_mod_timer(s->rtc.hz, s->rtc.next); if ((s->rtc.ctrl >> 3) & 3) { menelaus_rtc_update(s); if (((s->rtc.ctrl >> 3) & 3) == 1 && !s->rtc.tm.tm_sec) s->status |= 1 << 8; else if (((s->rtc.ctrl >> 3) & 3) == 2 && !s->rtc.tm.tm_min) s->status |= 1 << 8; else if (!s->rtc.tm.tm_hour) s->status |= 1 << 8; } else s->status |= 1 << 8; if ((s->rtc.ctrl >> 1) & 1) { if (s->rtc.alm_sec == 0) s->status |= 1 << 9; } if (s->rtc.next_comp <= 0) { s->rtc.next -= muldiv64((int16_t) s->rtc.comp, 1000, 0x8000); s->rtc.next_comp = 3600; } menelaus_update(s); }
1threat
How to make the search function on a website : <p>I was just wondering how I can make my website preview some info after the user searches for something stored in the database.</p>
0debug
Removing duplicates and printing the string in specific output in python : Well, I tried my hardest to get it done, but I end up failing at deleting the duplicates when listing them. Can you guys tell me how to do this, or give any clue? Here's the output I want to get: Please enter sentence: Hello Python! ' ' 1 '!' 1 'H' 1 'P' 1 'e' 1 'h' 1 'l' 2 'n' 1 'o' 2 't' 1 'y' 1 [' ', '!', 'H', 'P', 'e', 'h', 'l', 'n', 'o', 't', 'y'] This is the code I tried: from collections import OrderedDict def rmdup(str1): return "".join(OrderedDict.fromkeys(str1)) str = input('Please enter sentence: ') sorted_str = sorted(str) str1 = [] for i in range(len(str)): j = sorted_str.count(sorted_str[i]) str1 = list(rmdup(str1)) print('%r' % sorted_str + '\t' + '%r' % j) print(str1) and here's the output I get: Please enter sentence: Hello Python! ' ' 1 '!' 1 'H' 1 'P' 1 'e' 1 'h' 1 'l' 2 'l' 2 'n' 1 'o' 2 'o' 2 't' 1 'y' 1 [' ', '!', 'H', 'P', 'e', 'h', 'l', 'n', 'o', 't', 'y']
0debug
ogg_get_length (AVFormatContext * s) { ogg_t *ogg = s->priv_data; int idx = -1, i; offset_t size, end; if(s->pb.is_streamed) return 0; if (s->duration != AV_NOPTS_VALUE) return 0; size = url_fsize(&s->pb); if(size < 0) return 0; end = size > MAX_PAGE_SIZE? size - MAX_PAGE_SIZE: size; ogg_save (s); url_fseek (&s->pb, end, SEEK_SET); while (!ogg_read_page (s, &i)){ if (ogg->streams[i].granule != -1 && ogg->streams[i].granule != 0) idx = i; } if (idx != -1){ s->streams[idx]->duration = ogg_gptopts (s, idx, ogg->streams[idx].granule); } ogg->size = size; ogg_restore (s, 0); ogg_save (s); while (!ogg_read_page (s, &i)) { if (i == idx && ogg->streams[i].granule != -1 && ogg->streams[i].granule != 0) break; } if (i == idx) { s->streams[idx]->start_time = ogg_gptopts (s, idx, ogg->streams[idx].granule); s->streams[idx]->duration -= s->streams[idx]->start_time; } ogg_restore (s, 0); return 0; }
1threat
How do I include partial HTML files into index.html? : <p>I'm using python &amp; django and trying to include html files in my index.html file but can't seem to get it to work - any help will be appreciated.</p> <p>I'll add some context: I've downloaded a theme via Keentheme and want to use it in my project. The getting started tutorial (<a href="https://www.youtube.com/watch?v=ApO_obOK_00" rel="nofollow noreferrer">https://www.youtube.com/watch?v=ApO_obOK_00</a>) at around 14:20 instructs me to change all html files to php files and to use This doesn't work.</p> <p>The html files contain numerous instructions like the following: '''[html-partial:include:{"file":"partials/_mobile-header-base.html"}]/'''</p> <p>The file location for the above is as follows: ./partials/_mobile-header-base.html</p> <p>The tutorial only walks through the php include method - can anyone help?</p>
0debug
static int ehci_state_writeback(EHCIQueue *q) { EHCIPacket *p = QTAILQ_FIRST(&q->packets); int again = 0; assert(p != NULL); assert(p->qtdaddr == q->qtdaddr); ehci_trace_qtd(q, NLPTR_GET(p->qtdaddr), (EHCIqtd *) &q->qh.next_qtd); put_dwords(q->ehci, NLPTR_GET(p->qtdaddr), (uint32_t *) &q->qh.next_qtd, sizeof(EHCIqtd) >> 2); ehci_free_packet(p); if (q->qh.token & QTD_TOKEN_HALT) { ehci_set_state(q->ehci, q->async, EST_HORIZONTALQH); again = 1; } else { ehci_set_state(q->ehci, q->async, EST_ADVANCEQUEUE); again = 1; } return again; }
1threat
CSS integration with HTML : <p>I have created a separate CSS file for a tool tip as below.</p> <pre><code>.help-tip{ position: absolute; top: 18px; right: 18px; text-align: center; background-color: #BCDBEA; border-radius: 50%; width: 24px; height: 24px; font-size: 14px; line-height: 26px; cursor: default; } .help-tip:before{ content:'?'; font-weight: bold; color:#fff; } .help-tip:hover p{ display:block; transform-origin: 100% 0%; -webkit-animation: fadeIn 0.3s ease-in-out; animation: fadeIn 0.3s ease-in-out; } .help-tip p{ display: none; text-align: left; background-color: #1E2021; padding: 20px; width: 300px; position: absolute; border-radius: 3px; box-shadow: 1px 1px 1px rgba(0, 0, 0, 0.2); right: -4px; color: #FFF; font-size: 13px; line-height: 1.4; } .help-tip p:before{ position: absolute; content: ''; width:0; height: 0; border:6px solid transparent; border-bottom-color:#1E2021; right:10px; top:-12px; } .help-tip p:after{ width:100%; height:40px; content:''; position: absolute; top:-40px; left:0; } @-webkit-keyframes fadeIn { 0% { opacity:0; transform: scale(0.6); } 100% { opacity:100%; transform: scale(1); } } @keyframes fadeIn { 0% { opacity:0; } 100% { opacity:100%; } } </code></pre> <p>From this file calling the .help-tipclass from that in html file as below. I am expecting the tool tip to be shown which is not happening as expected. Earlier I had written css code in same html file under style section but then moved css code to separate file. 
</p> <pre><code>&lt;div class="domtooltip_style.help-tip"&gt; &lt;p&gt;&lt;b&gt;Deal-O-Matic (DOM)&lt;/b&gt; has implemented multiple controls and safeguards for deal creation to ensure deal quality and to ensure that automatically created deals meet retail teams’ business requirements.&lt;br&gt; &lt;b&gt;Contribution Profit (CP) Check&lt;/b&gt;: DOM ensures that Deals created automatically are CP positive through the duration of the deal and ends deals that turned CP negative.&lt;/p&gt; &lt;/div&gt; </code></pre> <p>I'd appreciate it if someone could show me the correct way to reference the class from the CSS file in the HTML file.</p>
0debug
Make This Java code more efficient : <p><em>This Code is work but i got error at last after the execution of my code i.e Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -1 at AddNum.main(AddNum.java:24), can any one fix this issue or make it more efficient to use.</em></p> <pre><code>import java.util.Scanner; public class AddNum { public static void main(String args[]) { int num1,num2,sum,i=0; int[] arr =new int[5]; System.out.println("Enter Any two number to Add and See magic"); Scanner scan = new Scanner(System.in); num1 = scan.nextInt(); num2 = scan.nextInt(); sum = num1 + num2; System.out.println("The Sum of "+num1+" and "+num2+" is = "+sum ); do { arr[i++]=sum%10; sum/=10; }while(sum&gt;0); for(int p=i;p&gt;=0;p--) { mag(arr[p-1]); } } public static void mag(int sum) { switch(sum){ case 0: System.out.println(" 00000"); System.out.println(" 00 00"); System.out.println(" 000 000"); System.out.println(" 000 000"); System.out.println(" 000 000"); System.out.println(" 000 000"); System.out.println(" 000 000"); System.out.println(" 00 00"); System.out.println(" 00000"); break; case 1: System.out.println(" 11"); System.out.println(" 111"); System.out.println(" 1 111"); System.out.println(" 111"); System.out.println(" 111"); System.out.println(" 111"); System.out.println(" 111"); System.out.println(" 111"); System.out.println(" 11111"); break; case 2: System.out.println(" 2222"); System.out.println(" 22 22"); System.out.println(" 2 222"); System.out.println(" 22"); System.out.println(" 22"); System.out.println(" 22"); System.out.println(" 222 2"); System.out.println(" 222 22"); System.out.println(" 222222222"); break; case 3: System.out.println(" 3333"); System.out.println(" 33 333"); System.out.println(" 3 333"); System.out.println(" 33"); System.out.println(" 333"); System.out.println(" 33"); System.out.println(" 3 333"); System.out.println(" 33 333"); System.out.println(" 3333"); break; case 4: System.out.println(" 44"); System.out.println(" 
4444"); System.out.println(" 4 444"); System.out.println(" 4 444 "); System.out.println(" 4 444"); System.out.println(" 4444444444"); System.out.println(" 444 "); System.out.println(" 444"); System.out.println(" 44444"); break; case 5: System.out.println(" 5555555"); System.out.println(" 555 5"); System.out.println(" 555"); System.out.println(" 555"); System.out.println(" 555555"); System.out.println(" 555"); System.out.println(" 5 555"); System.out.println(" 55 555"); System.out.println(" 5555"); break; case 6: System.out.println(" 66666"); System.out.println(" 66 66 "); System.out.println(" 666 6"); System.out.println(" 666"); System.out.println(" 6666666"); System.out.println(" 666 666"); System.out.println(" 666 666"); System.out.println(" 66 666"); System.out.println(" 66666"); break; case 7: System.out.println(" 777777777"); System.out.println(" 77 777"); System.out.println(" 7 777"); System.out.println(" 77"); System.out.println(" 77"); System.out.println(" 77"); System.out.println(" 777"); System.out.println(" 777"); System.out.println(" 777"); break; case 8: System.out.println(" 888888"); System.out.println(" 888 888"); System.out.println(" 888 888"); System.out.println(" 888888"); System.out.println(" 888 888"); System.out.println(" 888 888"); System.out.println(" 888 888"); System.out.println(" 888 888"); System.out.println(" 8888"); break; case 9: System.out.println(" 99999"); System.out.println(" 999 999"); System.out.println(" 999 999"); System.out.println(" 999 999"); System.out.println(" 9999999"); System.out.println(" 999"); System.out.println(" 9 999"); System.out.println(" 99 999"); System.out.println(" 9999"); break; } } } </code></pre>
0debug
How do I randomly apply CSS style using plain Javascript? : <p>I have a few list entries that I would like to randomly highlight, 1 item only with different background and text color.</p> <p>As shown below with "list 2" highlighted.</p> <pre><code>&lt;div id="entries"&gt; &lt;ul style="list-style-type: none;"&gt; &lt;li&gt;&lt;a href="#1"&gt;list 1&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="#2" style="color:#fff; background-color:#000"&gt;list 2&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="#3"&gt;list 3&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="#4"&gt;list 4&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="#5"&gt;list 5&lt;/a&gt;&lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; </code></pre> <p>How can I achieve this using vanilla javascript without the use of jQuery?</p>
0debug
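A possible answer sketch for the question above: pick a random index, then restyle that link with plain DOM calls. The random pick is kept as a pure helper so it works outside a browser; the DOM part only runs when a `document` global exists (all names here are my own).

```typescript
// Pure helper: uniform random integer in [0, length).
function pickRandomIndex(length: number): number {
  return Math.floor(Math.random() * length);
}

// Highlight one random <li> link; `doc` is passed in (typed loosely) so the
// function can be skipped cleanly in a non-browser environment.
function highlightRandomEntry(doc: any): void {
  const links = doc.querySelectorAll("#entries li a");
  const chosen = links[pickRandomIndex(links.length)];
  chosen.style.color = "#fff";
  chosen.style.backgroundColor = "#000";
}

const doc = (globalThis as any).document;
if (doc) {
  highlightRandomEntry(doc); // browser only
}
```

Calling `highlightRandomEntry` after the list is in the DOM (e.g. from a script at the end of `<body>`) picks a different entry on each page load.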
Inherit ES6/TS class from non-class : <p>Given the class is extended from a non-class (including, but not limited to, a function),</p> <pre><code>function Fn() {} class Class extends Fn { constructor() { super(); } } </code></pre> <p>what are the consequences? What do the specs say on that?</p> <p>It looks like the current implementations of Babel, Google V8 and Mozilla Spidermonkey are ok with that, and TypeScript throws</p> <blockquote> <p>Type '() => void' is not a constructor function type</p> </blockquote> <p>If this is valid ES2015 code, what's the proper way to handle it in TypeScript?</p>
0debug
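A possible answer sketch for the question above (my own names, not from any spec): one common TypeScript workaround is to assert the function to a constructor type before extending it. The runtime behavior is unchanged, since an ordinary function already has a [[Construct]] slot that `class extends` can use.

```typescript
// The function we want to extend, as in the question.
function Fn(this: unknown) {}

// Assert it to a constructor type so TypeScript accepts it in `extends`.
const FnAsCtor = Fn as unknown as new () => object;

class Sub extends FnAsCtor {
  constructor() {
    super(); // invokes Fn via [[Construct]], as the engines already allow
  }
}

const s = new Sub();
console.log(s instanceof Fn); // true: Sub.prototype chains through Fn.prototype
```

The double assertion (`as unknown as`) is deliberate: it tells the compiler "trust me, this is constructible" without changing what runs.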
Javascript Combination Method : <p>I need help to build a function that takes two params: "chars", "combinationLength". Example: </p> <pre><code> var chars = [1,2,3,4,5,6]; //can be also strings var combinationLength = 3; generateCombinations(chars, combinationLength){ } </code></pre> <p>the output should be: 111 112 113 114 115 116 121 122 etc...</p> <p>It means to take all the chars and create combinations; hope I was clear ;]</p>
0debug
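A possible answer sketch for the question above, assuming the desired output is every length-`combinationLength` string of picks from `chars` with repetition allowed (6^3 = 216 results for the example). `generateCombinations` is the asker's own name; the recursive helper is mine.

```typescript
// Combinations with repetition: build every string of `combinationLength`
// picks from `chars`, in the "111, 112, ..." order shown in the question.
function generateCombinations<T>(chars: T[], combinationLength: number): string[] {
  const results: string[] = [];
  const build = (prefix: string, depth: number): void => {
    if (depth === 0) {
      results.push(prefix);
      return;
    }
    for (const c of chars) {
      build(prefix + String(c), depth - 1); // append one more pick
    }
  };
  build("", combinationLength);
  return results;
}

const combos = generateCombinations([1, 2, 3, 4, 5, 6], 3);
console.log(combos.slice(0, 4)); // [ '111', '112', '113', '114' ]
```

Because each position cycles independently over `chars`, the result count is always `chars.length ** combinationLength`.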
static void usb_uas_command(UASDevice *uas, uas_ui *ui) { UASRequest *req; uint32_t len; uint16_t tag = be16_to_cpu(ui->hdr.tag); if (uas_using_streams(uas) && tag > UAS_MAX_STREAMS) { goto invalid_tag; } req = usb_uas_find_request(uas, tag); if (req) { goto overlapped_tag; } req = usb_uas_alloc_request(uas, ui); if (req->dev == NULL) { goto bad_target; } trace_usb_uas_command(uas->dev.addr, req->tag, usb_uas_get_lun(req->lun), req->lun >> 32, req->lun & 0xffffffff); QTAILQ_INSERT_TAIL(&uas->requests, req, next); if (uas_using_streams(uas) && uas->data3[req->tag] != NULL) { req->data = uas->data3[req->tag]; req->data_async = true; uas->data3[req->tag] = NULL; } req->req = scsi_req_new(req->dev, req->tag, usb_uas_get_lun(req->lun), ui->command.cdb, req); if (uas->requestlog) { scsi_req_print(req->req); } len = scsi_req_enqueue(req->req); if (len) { req->data_size = len; scsi_req_continue(req->req); } overlapped_tag: usb_uas_queue_fake_sense(uas, tag, sense_code_OVERLAPPED_COMMANDS); bad_target: usb_uas_queue_fake_sense(uas, tag, sense_code_LUN_NOT_SUPPORTED); g_free(req); }
1threat
gen_intermediate_code_internal(MoxieCPU *cpu, TranslationBlock *tb, bool search_pc) { CPUState *cs = CPU(cpu); DisasContext ctx; target_ulong pc_start; uint16_t *gen_opc_end; CPUBreakpoint *bp; int j, lj = -1; CPUMoxieState *env = &cpu->env; int num_insns; pc_start = tb->pc; gen_opc_end = tcg_ctx.gen_opc_buf + OPC_MAX_SIZE; ctx.pc = pc_start; ctx.saved_pc = -1; ctx.tb = tb; ctx.memidx = 0; ctx.singlestep_enabled = 0; ctx.bstate = BS_NONE; num_insns = 0; gen_tb_start(); do { if (unlikely(!QTAILQ_EMPTY(&cs->breakpoints))) { QTAILQ_FOREACH(bp, &cs->breakpoints, entry) { if (ctx.pc == bp->pc) { tcg_gen_movi_i32(cpu_pc, ctx.pc); gen_helper_debug(cpu_env); ctx.bstate = BS_EXCP; goto done_generating; } } } if (search_pc) { j = tcg_ctx.gen_opc_ptr - tcg_ctx.gen_opc_buf; if (lj < j) { lj++; while (lj < j) { tcg_ctx.gen_opc_instr_start[lj++] = 0; } } tcg_ctx.gen_opc_pc[lj] = ctx.pc; tcg_ctx.gen_opc_instr_start[lj] = 1; tcg_ctx.gen_opc_icount[lj] = num_insns; } ctx.opcode = cpu_lduw_code(env, ctx.pc); ctx.pc += decode_opc(cpu, &ctx); num_insns++; if (cs->singlestep_enabled) { break; } if ((ctx.pc & (TARGET_PAGE_SIZE - 1)) == 0) { break; } } while (ctx.bstate == BS_NONE && tcg_ctx.gen_opc_ptr < gen_opc_end); if (cs->singlestep_enabled) { tcg_gen_movi_tl(cpu_pc, ctx.pc); gen_helper_debug(cpu_env); } else { switch (ctx.bstate) { case BS_STOP: case BS_NONE: gen_goto_tb(env, &ctx, 0, ctx.pc); break; case BS_EXCP: tcg_gen_exit_tb(0); break; case BS_BRANCH: default: break; } } done_generating: gen_tb_end(tb, num_insns); *tcg_ctx.gen_opc_ptr = INDEX_op_end; if (search_pc) { j = tcg_ctx.gen_opc_ptr - tcg_ctx.gen_opc_buf; lj++; while (lj <= j) { tcg_ctx.gen_opc_instr_start[lj++] = 0; } } else { tb->size = ctx.pc - pc_start; tb->icount = num_insns; } }
1threat
static int raw_probe_geometry(BlockDriverState *bs, HDGeometry *geo) { BDRVRawState *s = bs->opaque; if (s->offset || s->has_size) { return -ENOTSUP; } return bdrv_probe_geometry(bs->file->bs, geo); }
1threat
Create a JSON field according to the value of a variable : <p>Suppose we have JSON data that comes in as:</p> <pre><code>msg = { fieldName: fieldA }; msg = { fieldName: fieldB }; </code></pre> <p>I can receive multiple of these messages, but every time I get a different field, I want it added to the 'data' JSON object, so I can access it like this:</p> <pre><code>data.fieldA data.fieldB </code></pre> <p>How can I do this?</p>
0debug
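A possible answer sketch for the question above: bracket notation lets an object key come from a variable, so each message's `fieldName` becomes a property on `data`. The `value` property is an assumption about where the message's payload lives; adjust to the real message shape.

```typescript
// Each message names its own field; bracket notation stores it under that
// name, so data.fieldA, data.fieldB, ... appear as messages arrive.
interface Msg {
  fieldName: string;
  value?: unknown; // assumed payload slot, not in the original question
}

const data: Record<string, unknown> = {};

function absorb(msg: Msg): void {
  data[msg.fieldName] = msg.value ?? true; // computed key from the variable
}

absorb({ fieldName: "fieldA", value: 42 });
absorb({ fieldName: "fieldB", value: "hello" });
console.log(data.fieldA, data.fieldB); // 42 hello
```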
How to remove rows of data in .csv files in R? : <p>Basically I have a .csv file that has a column of data, and I only want the rows where that column has a certain value. Is there a way to do this?</p>
0debug
Can we open another app from this code by executing a command given through the EditText view? If yes, how can it be done? : package com.example.honey.shell; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.view.View; import android.widget.Button; import android.widget.EditText; import android.widget.TextView; import java.io.BufferedReader; import java.io.InputStreamReader; public class MainActivity extends AppCompatActivity { TextView op; EditText ip; Button exec; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); op = (TextView) findViewById(R.id.textView); ip = (EditText) findViewById(R.id.editText); exec = (Button) findViewById(R.id.button); } public void execute(View view) { String input = ip.getText().toString(); StringBuffer output = new StringBuffer(); Process p; try { p = Runtime.getRuntime().exec(input); p.waitFor(); BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream())); String line = ""; while ((line = reader.readLine()) != null) { output.append(line + "\n"); } } catch (Exception e) {} op.setText(output.toString()); } } Actually I wish to open some other application from my app by executing a command, but it's not working; I tried commands such as cd but they didn't work, only the ls command is working...
0debug
static av_cold int vaapi_encode_h265_init_constant_bitrate(AVCodecContext *avctx) { VAAPIEncodeContext *ctx = avctx->priv_data; VAAPIEncodeH265Context *priv = ctx->priv_data; int hrd_buffer_size; int hrd_initial_buffer_fullness; if (avctx->bit_rate > INT32_MAX) { av_log(avctx, AV_LOG_ERROR, "Target bitrate of 2^31 bps or " "higher is not supported.\n"); return AVERROR(EINVAL); } if (avctx->rc_buffer_size) hrd_buffer_size = avctx->rc_buffer_size; else hrd_buffer_size = avctx->bit_rate; if (avctx->rc_initial_buffer_occupancy) hrd_initial_buffer_fullness = avctx->rc_initial_buffer_occupancy; else hrd_initial_buffer_fullness = hrd_buffer_size * 3 / 4; priv->rc_params.misc.type = VAEncMiscParameterTypeRateControl; priv->rc_params.rc = (VAEncMiscParameterRateControl) { .bits_per_second = avctx->bit_rate, .target_percentage = 66, .window_size = 1000, .initial_qp = (avctx->qmax >= 0 ? avctx->qmax : 40), .min_qp = (avctx->qmin >= 0 ? avctx->qmin : 20), .basic_unit_size = 0, }; ctx->global_params[ctx->nb_global_params] = &priv->rc_params.misc; ctx->global_params_size[ctx->nb_global_params++] = sizeof(priv->rc_params); priv->hrd_params.misc.type = VAEncMiscParameterTypeHRD; priv->hrd_params.hrd = (VAEncMiscParameterHRD) { .initial_buffer_fullness = hrd_initial_buffer_fullness, .buffer_size = hrd_buffer_size, }; ctx->global_params[ctx->nb_global_params] = &priv->hrd_params.misc; ctx->global_params_size[ctx->nb_global_params++] = sizeof(priv->hrd_params); priv->fixed_qp_idr = 30; priv->fixed_qp_p = 30; priv->fixed_qp_b = 30; av_log(avctx, AV_LOG_DEBUG, "Using constant-bitrate = %"PRId64" bps.\n", avctx->bit_rate); return 0; }
1threat
int css_do_rchp(uint8_t cssid, uint8_t chpid) { uint8_t real_cssid; if (cssid > channel_subsys.max_cssid) { return -EINVAL; } if (channel_subsys.max_cssid == 0) { real_cssid = channel_subsys.default_cssid; } else { real_cssid = cssid; } if (!channel_subsys.css[real_cssid]) { return -EINVAL; } if (!channel_subsys.css[real_cssid]->chpids[chpid].in_use) { return -ENODEV; } if (!channel_subsys.css[real_cssid]->chpids[chpid].is_virtual) { fprintf(stderr, "rchp unsupported for non-virtual chpid %x.%02x!\n", real_cssid, chpid); return -ENODEV; } css_queue_crw(CRW_RSC_CHP, CRW_ERC_INIT, channel_subsys.max_cssid > 0 ? 1 : 0, chpid); if (channel_subsys.max_cssid > 0) { css_queue_crw(CRW_RSC_CHP, CRW_ERC_INIT, 0, real_cssid << 8); } return 0; }
1threat
How to get intersection with lodash? : <p>I am trying to return the matching ids in this array of objects:</p> <pre><code>const arr = [{id:1,name:'Harry'},{id:2,name:'Bert'}] const arr2 =["1"] </code></pre> <p>How can I return just the id with value 1 in arr?</p>
0debug
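A possible answer sketch for the lodash question above: the catch is that `arr2` holds string ids while `arr` holds numeric ids, so the comparison has to normalize the types. Shown in plain code (lodash's `_.intersectionWith` with a comparator is the analogous library call; check your lodash version's docs).

```typescript
const arr = [{ id: 1, name: "Harry" }, { id: 2, name: "Bert" }];
const arr2 = ["1"];

// Keep only objects whose id (stringified) appears in arr2.
const matches = arr.filter(o => arr2.includes(String(o.id)));
console.log(matches); // [ { id: 1, name: 'Harry' } ]
```

Without the `String(...)` conversion the intersection comes back empty, because `1 !== "1"` under strict comparison.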
Oauth2, scopes and user roles : <p>I am asking a question conceptually here, as I am trying to understand the relationship between scopes and user roles in an OAuth2 based system.</p> <p>As I am implementing an API, I want to restrict access to specific resources by using scopes on the resources. I understand the use of access tokens to request resources, and I believe my understanding to be correct in that you specify your scope(s) when requesting the access token.</p> <p>What I am not entirely sure of is how restriction of scopes would work based on specific roles that an authenticated user is in. Let's assume Bob is an admin and Sue is a regular user. We have some resources protected by an <em>is_admin</em> scope. What stops Sue from requesting (and receiving) the <em>is_admin</em> scope in her access token?</p> <p>I am <em>thinking</em> that what should happen is the following:</p> <ul> <li>Bob authenticates.</li> <li>Bob's roles are looked up after his authentication is complete. His "admin" role has the "is_admin" scope attached.</li> <li>Bob asks for an access token with all the scopes collected from his various roles.</li> <li>Bob is automatically given those scopes for his access token.</li> </ul> <p>Is it up to my calling app to enforce asking only for the scopes Bob needs? Or is there something I am missing with regards to scopes?</p> <p>Can someone please enlighten me with some simple examples?</p>
0debug
Accessing a value from a nested dictionary : I have the following dictionary: Dict = {'Manu':{u'ID0020879.07': [{'ID': u'ID0020879.07', 'log': u'log-123-56', 'owner': [Manu], 'item': u'WRAITH', 'main_id': 5013L, 'status': u'noticed', 'serial': u'89980'}]}} How can I access the serial from this dictionary? I tried `Dict[Manu]['serial']`, but it's not working as expected. Guys, any idea?
0debug
tsql similar Id in same column : I have a table like the one below, and I need the result to contain just UserId 11, because UserId 12 has no LessonId 103 but is still returned. <code> SELECT * from LessonList where (LessonId = 102 and LessonValue = 1002) or (LessonId = 103 and LessonValue = 1003) or (LessonId = 102 and LessonValue = 1008) </code> <code> Id UserId LessonId LessonValue 1 11 102 1002 2 11 103 1003 3 12 102 1008 </code> I need a result like this: <code> Id UserId LessonId LessonValue 1 11 102 1002 2 11 103 1003 </code> Thanks
0debug
How to define optional constructor arguments with defaults in Typescript : <p>Is it possible to have optional constructor arguments with default value, like this</p> <pre><code>export class Test { constructor(private foo?: string="foo", private bar?: string="bar") {} } </code></pre> <p>This gives me the following error: </p> <p>Parameter cannot have question mark and initializer.</p> <p>I would like to create instances like </p> <pre><code>x = new Test(); // x.foo === 'foo' x = new Test('foo1'); // x.foo === 'foo1' x = new Test('foo1', 'bar1'); </code></pre> <p>What is the correct typescript way to achieve this?</p>
0debug
Iterate over objects of the array : <p>I have the below array of objects. How do I iterate over it to change <code>inventory</code> and <code>unit_price</code> if the product <code>name</code> is found, and create a new product if the <code>name</code> is not found? For example, if in <code>my_product</code> the name is <code>stool</code> as shown, this record should be added to the array; but if the <code>name</code> is, say, <code>table</code>, then the <code>inventory</code> and <code>unit_price</code> of the product <code>table</code> should be adjusted.</p> <pre><code>let products = [ { name: "chair", inventory: 5, unit_price: 45.99 }, { name: "table", inventory: 10, unit_price: 123.75 }, { name: "sofa", inventory: 2, unit_price: 399.50 } ]; let my_product = {name: "stool", inventory: 1, unit_price: 300} </code></pre>
0debug
static GSList *gd_vc_init(GtkDisplayState *s, VirtualConsole *vc, int index, GSList *group, GtkWidget *view_menu) { const char *label; char buffer[32]; char path[32]; #if VTE_CHECK_VERSION(0, 26, 0) VtePty *pty; #endif GIOChannel *chan; GtkWidget *scrolled_window; GtkAdjustment *vadjustment; int master_fd, slave_fd; snprintf(buffer, sizeof(buffer), "vc%d", index); snprintf(path, sizeof(path), "<QEMU>/View/VC%d", index); vc->chr = vcs[index]; if (vc->chr->label) { label = vc->chr->label; } else { label = buffer; } vc->menu_item = gtk_radio_menu_item_new_with_mnemonic(group, label); group = gtk_radio_menu_item_get_group(GTK_RADIO_MENU_ITEM(vc->menu_item)); gtk_menu_item_set_accel_path(GTK_MENU_ITEM(vc->menu_item), path); gtk_accel_map_add_entry(path, GDK_KEY_2 + index, GDK_CONTROL_MASK | GDK_MOD1_MASK); vc->terminal = vte_terminal_new(); master_fd = qemu_openpty_raw(&slave_fd, NULL); g_assert(master_fd != -1); #if VTE_CHECK_VERSION(0, 26, 0) pty = vte_pty_new_foreign(master_fd, NULL); vte_terminal_set_pty_object(VTE_TERMINAL(vc->terminal), pty); #else vte_terminal_set_pty(VTE_TERMINAL(vc->terminal), master_fd); #endif vte_terminal_set_scrollback_lines(VTE_TERMINAL(vc->terminal), -1); vadjustment = vte_terminal_get_adjustment(VTE_TERMINAL(vc->terminal)); scrolled_window = gtk_scrolled_window_new(NULL, vadjustment); gtk_container_add(GTK_CONTAINER(scrolled_window), vc->terminal); vte_terminal_set_size(VTE_TERMINAL(vc->terminal), 80, 25); vc->fd = slave_fd; vc->chr->opaque = vc; vc->scrolled_window = scrolled_window; gtk_scrolled_window_set_policy(GTK_SCROLLED_WINDOW(vc->scrolled_window), GTK_POLICY_AUTOMATIC, GTK_POLICY_AUTOMATIC); gtk_notebook_append_page(GTK_NOTEBOOK(s->notebook), scrolled_window, gtk_label_new(label)); g_signal_connect(vc->menu_item, "activate", G_CALLBACK(gd_menu_switch_vc), s); gtk_menu_shell_append(GTK_MENU_SHELL(view_menu), vc->menu_item); qemu_chr_be_generic_open(vc->chr); if (vc->chr->init) { vc->chr->init(vc->chr); } chan = g_io_channel_unix_new(vc->fd); g_io_add_watch(chan, G_IO_IN, gd_vc_in, vc); return group; }
1threat
static int buffered_put_buffer(void *opaque, const uint8_t *buf, int64_t pos, int size) { QEMUFileBuffered *s = opaque; ssize_t error; DPRINTF("putting %d bytes at %" PRId64 "\n", size, pos); error = qemu_file_get_error(s->file); if (error) { DPRINTF("flush when error, bailing: %s\n", strerror(-error)); return error; } if (size <= 0) { return size; } if (size > (s->buffer_capacity - s->buffer_size)) { DPRINTF("increasing buffer capacity from %zu by %zu\n", s->buffer_capacity, size + 1024); s->buffer_capacity += size + 1024; s->buffer = g_realloc(s->buffer, s->buffer_capacity); } memcpy(s->buffer + s->buffer_size, buf, size); s->buffer_size += size; return size; }
1threat
Check if the array has three consecutive numbers in sequence : <p>I have an array like the one below:</p> <p><code>[1,2,'b',4 ,'a','b',5,'o',7,1,3,'p',9,'p']</code></p> <p>I want to check whether the above array contains three consecutive numbers in a sequence (i.e. three numbers in a row, such as <code>[1,2,3]</code>).</p> <p>From the above array I want the output as in the example below:</p> <p><code>[7,1,3]</code> -> since this run of numbers occurs in sequence without being blocked by a letter.</p>
0debug
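The expected output in the consecutive-numbers question above suggests one concrete reading: find the first run of three numeric items not interrupted by a letter. A Python sketch of that scan (the function name and `length` parameter are my own choices, not from the question):

```python
def first_numeric_run(items, length=3):
    """Return the first run of `length` consecutive numeric items,
    i.e. numbers not separated by any non-numeric element."""
    run = []
    for x in items:
        if isinstance(x, (int, float)):
            run.append(x)
            if len(run) == length:
                return run
        else:
            run = []          # a letter breaks the current run
    return None               # no run of the required length found

data = [1, 2, 'b', 4, 'a', 'b', 5, 'o', 7, 1, 3, 'p', 9, 'p']
print(first_numeric_run(data))  # [7, 1, 3]
```

On the sample array, `1, 2` is cut short by `'b'`, and `4` and `5` are isolated, so `7, 1, 3` is the first qualifying run, matching the question's expected output.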
Prawn::Errors::IncompatibleStringEncoding: Your document includes text that's not compatible with the Windows-1252 character set : <p>Below is my Prawn PDF file to generate a name on the PDF - </p> <pre><code>def initialize(opportunity_application) pdf = Prawn::Document.new(:page_size =&gt; [1536, 2048], :page_layout =&gt; :landscape) cell_1 = pdf.make_cell(content: "Eylül Çamcı".force_encoding('iso-8859-1').encode('utf-8'), borders: [], size: 66, :text_color =&gt; "000000", padding: [0,0,0,700], font: "app/assets/fonts/opensans.ttf") t = pdf.make_table [[cell_1]] t.draw pdf.render_file "tmp/mos_certificates/application_test.pdf" end </code></pre> <p>When rendering the name Eylül Çamcı which is Turkish, I get the following error - </p> <pre><code>Prawn::Errors::IncompatibleStringEncoding: Your document includes text that's not compatible with the Windows-1252 character set. If you need full UTF-8 support, use TTF fonts instead of PDF's built-in fonts. </code></pre> <p>I'm already using a TTF font that supports the characters in that name, what can I do to print the name correctly?</p>
0debug
static int fraps2_decode_plane(FrapsContext *s, uint8_t *dst, int stride, int w, int h, const uint8_t *src, int size, int Uoff, const int step) { int i, j; GetBitContext gb; VLC vlc; Node nodes[512]; for(i = 0; i < 256; i++) nodes[i].count = bytestream_get_le32(&src); size -= 1024; if (ff_huff_build_tree(s->avctx, &vlc, 256, nodes, huff_cmp, FF_HUFFMAN_FLAG_ZERO_COUNT) < 0) return -1; s->dsp.bswap_buf((uint32_t *)s->tmpbuf, (const uint32_t *)src, size >> 2); init_get_bits(&gb, s->tmpbuf, size * 8); for(j = 0; j < h; j++){ for(i = 0; i < w*step; i += step){ dst[i] = get_vlc2(&gb, vlc.table, 9, 3); if(j) dst[i] += dst[i - stride]; else if(Uoff) dst[i] += 0x80; } dst += stride; if(get_bits_left(&gb) < 0){ free_vlc(&vlc); return -1; } } free_vlc(&vlc); return 0; }
1threat
static int aac_decode_frame(AVCodecContext *avctx, void *data, int *got_frame_ptr, AVPacket *avpkt) { AACContext *ac = avctx->priv_data; const uint8_t *buf = avpkt->data; int buf_size = avpkt->size; GetBitContext gb; int buf_consumed; int buf_offset; int err; int new_extradata_size; const uint8_t *new_extradata = av_packet_get_side_data(avpkt, AV_PKT_DATA_NEW_EXTRADATA, &new_extradata_size); int jp_dualmono_size; const uint8_t *jp_dualmono = av_packet_get_side_data(avpkt, AV_PKT_DATA_JP_DUALMONO, &jp_dualmono_size); if (new_extradata && 0) { av_free(avctx->extradata); avctx->extradata = av_mallocz(new_extradata_size + FF_INPUT_BUFFER_PADDING_SIZE); if (!avctx->extradata) return AVERROR(ENOMEM); avctx->extradata_size = new_extradata_size; memcpy(avctx->extradata, new_extradata, new_extradata_size); push_output_configuration(ac); if (decode_audio_specific_config(ac, ac->avctx, &ac->oc[1].m4ac, avctx->extradata, avctx->extradata_size*8, 1) < 0) { pop_output_configuration(ac); } } ac->dmono_mode = 0; if (jp_dualmono && jp_dualmono_size > 0) ac->dmono_mode = 1 + *jp_dualmono; if (ac->force_dmono_mode >= 0) ac->dmono_mode = ac->force_dmono_mode; init_get_bits(&gb, buf, buf_size * 8); if ((err = aac_decode_frame_int(avctx, data, got_frame_ptr, &gb, avpkt)) < 0) return err; buf_consumed = (get_bits_count(&gb) + 7) >> 3; for (buf_offset = buf_consumed; buf_offset < buf_size; buf_offset++) if (buf[buf_offset]) break; return buf_size > buf_offset ? buf_consumed : buf_size; }
1threat
Declare objects in every method? : <p>So I'm new to Java, but I'm fluent in Python, and I'm stuck on a basic problem. Do I have to declare an object in the same method in which I'm going to use it? Or is there a way to transfer objects from method to method? Thank you for your help (:</p>
0debug
I want to delete duplicate rows using this query only : **delete from employee where ( select * from (select row_number() over (partition by id) rn from employee) alias) > 1;** The above query is not working and gives this error message: **Error Code: 1242. Subquery returns more than 1 row**
0debug
Deep neural network skip connection implemented as summation vs concatenation? : <p>In deep neural networks, we can implement skip connections to help:</p> <ul> <li><p>solve the vanishing-gradient problem and train faster;</p></li> <li><p>let the network learn a combination of low-level and high-level features;</p></li> <li><p>recover information lost during downsampling, e.g. by max pooling.</p></li> </ul> <p><a href="https://medium.com/@mikeliao/deep-layer-aggregation-combining-layers-in-nn-architectures-2744d29cab8" rel="noreferrer">https://medium.com/@mikeliao/deep-layer-aggregation-combining-layers-in-nn-architectures-2744d29cab8</a></p> <p>However, reading some source code, I see skip connections implemented sometimes as concatenation and sometimes as summation. So my question is: what are the benefits of each of these implementations?</p>
0debug
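One way to see the practical difference asked about in the skip-connection question is through tensor shapes: summation (ResNet-style) keeps the channel count fixed and forces the two branches to share a representation, while concatenation (DenseNet/U-Net-style) stacks channels so later layers can weight low- and high-level features independently, at the cost of a wider tensor. A minimal NumPy sketch of just the shape consequence (shapes are illustrative; real implementations would use a deep-learning framework):

```python
import numpy as np

x = np.ones((1, 64, 32, 32))    # input feature map, layout (N, C, H, W)
f_x = np.ones((1, 64, 32, 32))  # output of the skipped block

# Summation: shapes must match exactly, channel count is unchanged.
summed = x + f_x
print(summed.shape)              # (1, 64, 32, 32)

# Concatenation along the channel axis: channel count grows,
# so the next layer sees both feature sets side by side.
concat = np.concatenate([x, f_x], axis=1)
print(concat.shape)              # (1, 128, 32, 32)
```

The wider concatenated tensor is why DenseNet-style networks grow in parameter count per layer, while ResNet-style summation keeps layer widths constant.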
void do_blockdev_backup(const char *job_id, const char *device, const char *target, enum MirrorSyncMode sync, bool has_speed, int64_t speed, bool has_on_source_error, BlockdevOnError on_source_error, bool has_on_target_error, BlockdevOnError on_target_error, BlockJobTxn *txn, Error **errp) { BlockBackend *blk; BlockDriverState *bs; BlockDriverState *target_bs; Error *local_err = NULL; AioContext *aio_context; if (!has_speed) { speed = 0; } if (!has_on_source_error) { on_source_error = BLOCKDEV_ON_ERROR_REPORT; } if (!has_on_target_error) { on_target_error = BLOCKDEV_ON_ERROR_REPORT; } blk = blk_by_name(device); if (!blk) { error_setg(errp, "Device '%s' not found", device); return; } aio_context = blk_get_aio_context(blk); aio_context_acquire(aio_context); if (!blk_is_available(blk)) { error_setg(errp, "Device '%s' has no medium", device); goto out; } bs = blk_bs(blk); target_bs = bdrv_lookup_bs(target, target, errp); if (!target_bs) { goto out; } if (bdrv_get_aio_context(target_bs) != aio_context) { if (!bdrv_has_blk(target_bs)) { bdrv_set_aio_context(target_bs, aio_context); } else { error_setg(errp, "Target is attached to a different thread from " "source."); goto out; } } backup_start(job_id, bs, target_bs, speed, sync, NULL, on_source_error, on_target_error, block_job_cb, bs, txn, &local_err); if (local_err != NULL) { error_propagate(errp, local_err); } out: aio_context_release(aio_context); }
1threat
Relationship between event loop,libuv and v8 engine : <p>I am learning through the architecture of Node.js. I have following questions.</p> <ol> <li>Is event loop a part of libuv or v8?</li> <li>Is event queue a part of event loop? are event queue generated by libuv or v8 engine or event loop itself?</li> <li>What is the connection between libuv and v8 engine?</li> <li>If event loop is single threaded, does libuv come into picture to create multiple threads to handle File I/O?</li> <li>Does browsers have event loop mechanism or just Node.js does?</li> </ol>
0debug
ElementTree TypeError "write() argument must be str, not bytes" in Python3 : <p>I've got a problem generating an .SVG file with Python 3 and ElementTree.</p> <pre><code> from xml.etree import ElementTree as et doc = et.Element('svg', width='480', height='360', version='1.1', xmlns='http://www.w3.org/2000/svg') #Doing things with et and doc f = open('sample.svg', 'w') f.write('&lt;?xml version=\"1.0\" standalone=\"no\"?&gt;\n') f.write('&lt;!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n') f.write('\"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\"&gt;\n') f.write(et.tostring(doc)) f.close() </code></pre> <p>The function <code>et.tostring(doc)</code> triggers the TypeError "write() argument must be str, not bytes". I don't understand that behavior; shouldn't <code>et</code> convert the ElementTree element into a string? It works in Python 2, but not in Python 3. What did I do wrong?</p>
0debug
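For context on the ElementTree question above: in Python 3, `et.tostring()` returns `bytes` by default, and a file opened in text mode (`'w'`) only accepts `str`. Passing `encoding='unicode'` makes `tostring` return a `str` instead (alternatively, the file could be opened in `'wb'` mode and the header lines written as bytes). A small sketch, writing to an in-memory buffer instead of a real file:

```python
import io
from xml.etree import ElementTree as et

doc = et.Element('svg', width='480', height='360', version='1.1',
                 xmlns='http://www.w3.org/2000/svg')

# tostring() yields bytes in Python 3 ...
svg_bytes = et.tostring(doc)
print(type(svg_bytes))            # <class 'bytes'>

# ... unless encoding='unicode' is passed, which yields a str
# that can be mixed freely with hand-written header lines.
svg_text = et.tostring(doc, encoding='unicode')

buf = io.StringIO()               # stands in for open('sample.svg', 'w')
buf.write('<?xml version="1.0" standalone="no"?>\n')
buf.write(svg_text)
```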
Module compiled with Swift 4.0 cannot be imported in Swift 4.0.1 : <p>I am getting the error in the title. I have recompiled the framework using the same Xcode, and it still gives me this error.</p> <ul> <li>Base SDK iOS 11.1 for both</li> <li>Swift Language Version Swift 4.0 for both</li> <li>Not using Pods/Carthage</li> </ul> <p>I hope someone might know the cause.</p>
0debug
toString method of a linear equation : <p>I am trying to represent an equation in this format: "a = bx + c"</p> <ul> <li>If b is 0, it should return “a = c”.</li> <li>If c is 0, then it should return “a = bx”</li> <li>Also, when c is negative it should not return something like "5 = 8x + -7" </li> <li>And when b=1, it should not show the coefficient of x.</li> </ul> <p>Can you help me?</p>
0debug
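The special cases listed in the linear-equation question above reduce to a few branches. The sketch below shows the branching logic in Python (the question is presumably about a Java `toString`, but the conditions translate directly; the function name is my own):

```python
def equation_string(a, b, c):
    """Format 'a = bx + c' with the special cases from the question."""
    if b == 0:
        return f"{a} = {c}"
    x_term = "x" if b == 1 else f"{b}x"   # hide a coefficient of 1
    if c == 0:
        return f"{a} = {x_term}"
    sign = "+" if c > 0 else "-"          # render '- 7', never '+ -7'
    return f"{a} = {x_term} {sign} {abs(c)}"

print(equation_string(5, 8, -7))  # 5 = 8x - 7
print(equation_string(3, 0, 4))   # 3 = 4
print(equation_string(2, 1, 0))   # 2 = x
```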
Why do I get None as a result : <p>I have the following script:</p> <pre><code>import math scores = [3.0,1.0,0.1] sum = 0 i=0 j=0 for s in scores: sum = sum + math.exp(scores[i]) i=i+1 def myFunction(x): math.exp(x)/sum for s2 in scores: print(myFunction(scores[j])) j=j+1 </code></pre> <p>But, the output I get is:</p> <pre><code>None None None </code></pre> <p>Why is that? How can I retrieve the correct values?</p> <p>Thanks.</p>
0debug
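The reason for the `None` output in the question above is that `myFunction` computes `math.exp(x)/sum` but never returns it; a Python function without an explicit `return` returns `None`. A sketch of the fix (note the original also shadows the built-in `sum` with a variable, which is avoided here):

```python
import math

scores = [3.0, 1.0, 0.1]
total = sum(math.exp(s) for s in scores)   # built-in sum, no shadowing

def softmax_term(x):
    return math.exp(x) / total             # the missing `return`

for s in scores:
    print(softmax_term(s))
```

With the `return` in place, the three printed values are the softmax of the scores and sum to 1.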
How to stop keys from spamming? : <p>I'm currently struggling with Windows "inputs". I would like to ask if there is a way to stop keys from spamming.</p> <p>In fact I'm handling this message: case WM_KEYDOWN: // do some stuff</p> <p>The whole time I hold the key down, it keeps firing the functions that depend on the press, but I would like it to wait until the key has been released once before it can fire again.</p> <p>Is there a flag that checks that for me, or do I have to hard-code it myself? (Searching the net didn't help me out. :/)</p>
0debug
How to add JNI (C/C++ native code) to existing Android Studio project : <p>Like the title says - how to add native code to existing Android Studio project, without breaking the current project, including gradle and proguard settings?</p>
0debug
static void dct_unquantize_mpeg2_mmx(MpegEncContext *s, DCTELEM *block, int n, int qscale) { int nCoeffs; const UINT16 *quant_matrix; if(s->alternate_scan) nCoeffs= 64; else nCoeffs= nCoeffs= zigzag_end[ s->block_last_index[n] ]; if (s->mb_intra) { int block0; if (n < 4) block0 = block[0] * s->y_dc_scale; else block0 = block[0] * s->c_dc_scale; quant_matrix = s->intra_matrix; asm volatile( "pcmpeqw %%mm7, %%mm7 \n\t" "psrlw $15, %%mm7 \n\t" "movd %2, %%mm6 \n\t" "packssdw %%mm6, %%mm6 \n\t" "packssdw %%mm6, %%mm6 \n\t" "movl %3, %%eax \n\t" ".balign 16\n\t" "1: \n\t" "movq (%0, %%eax), %%mm0 \n\t" "movq 8(%0, %%eax), %%mm1 \n\t" "movq (%1, %%eax), %%mm4 \n\t" "movq 8(%1, %%eax), %%mm5 \n\t" "pmullw %%mm6, %%mm4 \n\t" "pmullw %%mm6, %%mm5 \n\t" "pxor %%mm2, %%mm2 \n\t" "pxor %%mm3, %%mm3 \n\t" "pcmpgtw %%mm0, %%mm2 \n\t" "pcmpgtw %%mm1, %%mm3 \n\t" "pxor %%mm2, %%mm0 \n\t" "pxor %%mm3, %%mm1 \n\t" "psubw %%mm2, %%mm0 \n\t" "psubw %%mm3, %%mm1 \n\t" "pmullw %%mm4, %%mm0 \n\t" /* *q */ "pmullw %%mm5, %%mm1 \n\t" /* *q */ "pxor %%mm4, %%mm4 \n\t" "pxor %%mm5, %%mm5 \n\t" "pcmpeqw (%0, %%eax), %%mm4 \n\t" "pcmpeqw 8(%0, %%eax), %%mm5 \n\t" "psraw $3, %%mm0 \n\t" "psraw $3, %%mm1 \n\t" "pxor %%mm2, %%mm0 \n\t" "pxor %%mm3, %%mm1 \n\t" "psubw %%mm2, %%mm0 \n\t" "psubw %%mm3, %%mm1 \n\t" "pandn %%mm0, %%mm4 \n\t" "pandn %%mm1, %%mm5 \n\t" "movq %%mm4, (%0, %%eax) \n\t" "movq %%mm5, 8(%0, %%eax) \n\t" "addl $16, %%eax \n\t" "js 1b \n\t" ::"r" (block+nCoeffs), "r"(quant_matrix+nCoeffs), "g" (qscale), "g" (-2*nCoeffs) : "%eax", "memory" ); block[0]= block0; } else { quant_matrix = s->non_intra_matrix; asm volatile( "pcmpeqw %%mm7, %%mm7 \n\t" "psrlq $48, %%mm7 \n\t" "movd %2, %%mm6 \n\t" "packssdw %%mm6, %%mm6 \n\t" "packssdw %%mm6, %%mm6 \n\t" "movl %3, %%eax \n\t" ".balign 16\n\t" "1: \n\t" "movq (%0, %%eax), %%mm0 \n\t" "movq 8(%0, %%eax), %%mm1 \n\t" "movq (%1, %%eax), %%mm4 \n\t" "movq 8(%1, %%eax), %%mm5 \n\t" "pmullw %%mm6, %%mm4 \n\t" "pmullw %%mm6, %%mm5 \n\t" "pxor %%mm2, %%mm2 \n\t" "pxor %%mm3, %%mm3 \n\t" "pcmpgtw %%mm0, %%mm2 \n\t" "pcmpgtw %%mm1, %%mm3 \n\t" "pxor %%mm2, %%mm0 \n\t" "pxor %%mm3, %%mm1 \n\t" "psubw %%mm2, %%mm0 \n\t" "psubw %%mm3, %%mm1 \n\t" "paddw %%mm0, %%mm0 \n\t" /* *2 */ "paddw %%mm1, %%mm1 \n\t" /* *2 */ "pmullw %%mm4, %%mm0 \n\t" /* *2*q */ "pmullw %%mm5, %%mm1 \n\t" /* *2*q */ "paddw %%mm4, %%mm0 \n\t" "paddw %%mm5, %%mm1 \n\t" "pxor %%mm4, %%mm4 \n\t" "pxor %%mm5, %%mm5 \n\t" "pcmpeqw (%0, %%eax), %%mm4 \n\t" "pcmpeqw 8(%0, %%eax), %%mm5 \n\t" "psrlw $4, %%mm0 \n\t" "psrlw $4, %%mm1 \n\t" "pxor %%mm2, %%mm0 \n\t" "pxor %%mm3, %%mm1 \n\t" "psubw %%mm2, %%mm0 \n\t" "psubw %%mm3, %%mm1 \n\t" "pandn %%mm0, %%mm4 \n\t" "pandn %%mm1, %%mm5 \n\t" "pxor %%mm4, %%mm7 \n\t" "pxor %%mm5, %%mm7 \n\t" "movq %%mm4, (%0, %%eax) \n\t" "movq %%mm5, 8(%0, %%eax) \n\t" "addl $16, %%eax \n\t" "js 1b \n\t" "movd 124(%0, %3), %%mm0 \n\t" "movq %%mm7, %%mm6 \n\t" "psrlq $32, %%mm7 \n\t" "pxor %%mm6, %%mm7 \n\t" "movq %%mm7, %%mm6 \n\t" "psrlq $16, %%mm7 \n\t" "pxor %%mm6, %%mm7 \n\t" "pslld $31, %%mm7 \n\t" "psrlq $15, %%mm7 \n\t" "pxor %%mm7, %%mm0 \n\t" "movd %%mm0, 124(%0, %3) \n\t" ::"r" (block+nCoeffs), "r"(quant_matrix+nCoeffs), "g" (qscale), "r" (-2*nCoeffs) : "%eax", "memory" ); } }
1threat
Streaming Audio in FLAC or AMR_WB to the Google Speech API : <p>I need to run the google speech api in somewhat low bandwidth environments.</p> <p>Based on reading about best practices, it seems my best bet is to use the AMR_WB format.</p> <p>However, the following code produces no exceptions, and I get no responses in the <code>onError(t: Throwable)</code> method, but the API is not returning any values at all in the <code>onNext(value: StreamingRecognizeResponse)</code> method.</p> <p>If I change the format in <code>.setEncoding()</code> from <code>FLAC</code> or <code>AMR_WB</code> back to <code>LINEAR16</code> everything works fine.</p> <p>AudioEmitter.kt</p> <pre><code>fun start( encoding: Int = AudioFormat.ENCODING_PCM_16BIT, channel: Int = AudioFormat.CHANNEL_IN_MONO, sampleRate: Int = 16000, subscriber: (ByteString) -&gt; Unit ) </code></pre> <p>MainActivity.kt</p> <pre><code>builder.streamingConfig = StreamingRecognitionConfig.newBuilder() .setConfig(RecognitionConfig.newBuilder() .setLanguageCode("en-US") .setEncoding(RecognitionConfig.AudioEncoding.AMR_WB) .setSampleRateHertz(16000) .build()) .setInterimResults(true) .setSingleUtterance(false) .build() </code></pre>
0debug
query = 'SELECT * FROM customers WHERE email = ' + email_input
1threat